This Sphinx PR will fix this issue if it gets merged: https://github.com/sphinx-doc/sphinx/pull/10766
@aobo-y Originally, Captum's loss functions [were set up similarly](https://github.com/pytorch/captum/pull/500#discussion_r515453337) to the simple class-like functions that [Lucid uses](https://github.com/tensorflow/lucid/blob/master/lucid/optvis/objectives.py). Upon review, we then changed the losses to use classes [instead](https://github.com/pytorch/captum/pull/527). Ludwig (one...
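As an illustration (the names below are hypothetical, not Captum's actual API), the difference between the two styles looks roughly like this:

```
# Function style, as in Lucid: a factory returning a closure over the target.
def channel_objective(layer, n_channel):
    def objective(activations):
        return -activations[layer][:, n_channel].mean()
    return objective


# Class style, as Captum adopted: target and behavior bundled into an object.
class ChannelLoss:
    def __init__(self, layer, n_channel):
        self.layer = layer
        self.n_channel = n_channel

    def __call__(self, activations):
        return -activations[self.layer][:, self.n_channel].mean()
```

Both are invoked the same way on a dict of layer activations, but the class form is easier to compose and extend.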
This PR can be skipped for now.
@NarineK I removed `InputBaselineXGradient` from the rst file in: https://github.com/pytorch/captum/pull/985
The CLIP PRs are: #927, #943, #945, #961, #965, #966, #968
Using ImageMagick, I figured out that the input image I was trying to use had an sRGB color space instead of an RGB color space.
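A quick way to check this (a sketch only; the file name is a placeholder) is to ask ImageMagick's `identify` for the color space:

```
import subprocess

# Ask ImageMagick for the color space of the input image.
result = subprocess.run(
    ["identify", "-format", "%[colorspace]", "input.png"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)  # e.g. "sRGB"
```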
To solve this issue in [another project](https://github.com/ProGamerGov/Neural-Tools/commit/40a447267af19a61150d7b98738c498bb0b9f029), I used code like this:

```
import scipy.ndimage as spi

# Read the image as 3-channel RGB and scale pixel values to roughly [0, 1]
img = spi.imread(org_content, mode="RGB").astype(float) / 256
```

to make sure that every input image was read as...
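`scipy.ndimage.imread` has since been deprecated and removed from SciPy, so a rough modern equivalent (a sketch assuming Pillow and NumPy; `convert("RGB")` forces a 3-channel read but does not apply ICC transforms) would be:

```
import numpy as np
from PIL import Image

# Force a 3-channel RGB read and scale pixel values to roughly [0, 1].
img = np.asarray(Image.open(org_content).convert("RGB"), dtype=float) / 256
```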
It's the layer, channels, and input that are affecting it, I think. Changing any of those should help.
I've got Places205 working with DeepDream in PyTorch, so I'll see if Places365 works just as well once I convert it.
@ad48hp Here's a random Places205 layer:

[image]

And a random Places365 layer:

[image]

It seems to work just as well for me.