Daniel Nouri
I looked at the problem a bit more and now understand the issue better. The code that was removed in #228 had the same issue, since it did not...
I tried the script and it failed with some weird recursion error.
OK just let me know if that's a joke or if it actually works. ;-)
I'll take another look next week. So far I haven't had much luck.
@kungfujam Note that as per the original post, you can always do this: 1. `save_params_to` on the GPU machine 2. initialize an identical NeuralNet on the CPU machine 3. `load_params_from`...
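The save-on-GPU / load-on-CPU recipe above can be sketched with a minimal stand-in model (hypothetical `TinyNet` class for illustration; real code would call these methods on nolearn's `NeuralNet`):

```python
import os
import pickle
import tempfile


class TinyNet:
    """Hypothetical stand-in for a NeuralNet exposing
    save_params_to / load_params_from, here backed by pickle."""

    def __init__(self, weights=None):
        self.weights = weights if weights is not None else [0.0, 0.0]

    def save_params_to(self, path):
        # Persist only the parameters, not the whole (GPU-bound) model object
        with open(path, "wb") as f:
            pickle.dump(self.weights, f)

    def load_params_from(self, path):
        with open(path, "rb") as f:
            self.weights = pickle.load(f)


# 1. On the "GPU machine": train, then save only the parameters
gpu_net = TinyNet(weights=[1.5, -2.0])
path = os.path.join(tempfile.mkdtemp(), "params.pkl")
gpu_net.save_params_to(path)

# 2. + 3. On the "CPU machine": build an identically configured net,
# then load the saved parameters into it
cpu_net = TinyNet()
cpu_net.load_params_from(path)
```

The key point is that only the parameter arrays travel between machines; the architecture is reconstructed identically on each side.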
This looks great! I'm not sure what to do with the int argument to `__getitem__` either. Does it make sense to just pass it on and do the same thing...
> We could implicitly shape back to the original dimensionality, but that would be an unexpected behavior differing from np.array.

OK, let's raise a `ValueError` then?

> My suggestion with...
We're [already](https://github.com/ottogroup/palladium/blob/master/setup.py#L59-L64) using [setuptools extras](https://packaging.python.org/tutorials/installing-packages/#installing-setuptools-extras) for our optional dependencies on `julia`, `rpy2`, and others. I like the idea of adding an S3 persister to Palladium. We can even put it...
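The extras pattern looks roughly like this in a `setup.py` (an illustrative sketch; the package names and the `S3` extra are assumptions, not Palladium's actual lists):

```python
# Passed to setuptools.setup(..., extras_require=extras_require)
extras_require = {
    "julia": ["julia"],   # optional Julia bridge
    "R": ["rpy2"],        # optional R bridge
    "S3": ["boto3"],      # hypothetical new extra for an S3 persister
}

# Users opt in per feature, e.g.:
#   pip install palladium[S3]
```

Keeping the S3 dependency behind an extra means users who never touch S3 don't pay for `boto3`.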
Another way to deal with this is to move the normalization into a model wrapper (or "meta-estimator" in scikit-learn). A `NormalizeTarget` wrapper would normalize on the way in and out....
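A minimal sketch of such a wrapper, assuming a plain `fit`/`predict` estimator interface (`NormalizeTarget` is the name proposed above; the implementation here is a hypothetical pure-Python illustration, not scikit-learn code):

```python
class NormalizeTarget:
    """Meta-estimator sketch: standardizes the target before fitting the
    wrapped estimator, and inverts the transform on prediction."""

    def __init__(self, estimator):
        self.estimator = estimator

    def fit(self, X, y):
        # Normalize "on the way in"
        self.mean_ = sum(y) / len(y)
        var = sum((v - self.mean_) ** 2 for v in y) / len(y)
        self.std_ = var ** 0.5 or 1.0  # guard against constant targets
        self.estimator.fit(X, [(v - self.mean_) / self.std_ for v in y])
        return self

    def predict(self, X):
        # De-normalize "on the way out"
        return [p * self.std_ + self.mean_
                for p in self.estimator.predict(X)]


class Memorize:
    """Toy inner estimator for demonstration: replays its training targets."""

    def fit(self, X, y):
        self.y_ = list(y)
        return self

    def predict(self, X):
        return self.y_


# Round trip: predictions come back on the original target scale
net = NormalizeTarget(Memorize())
net.fit([[0], [1], [2]], [2.0, 4.0, 6.0])
preds = net.predict([[0], [1], [2]])
```

The advantage of the wrapper approach is that the normalization travels with the pickled model, so nothing extra needs to happen at prediction time.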
There's this utility called `palladium.interfaces.annotate` which is used by Palladium to store the model version along with the model pickle. It's a glorified way of sticking an attribute onto the...
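The attribute-sticking pattern can be sketched like this (a hedged simplification of the idea, not Palladium's actual implementation of `palladium.interfaces.annotate`):

```python
def annotate(model, metadata=None):
    """Sketch: merge a metadata dict into an attribute on the model object,
    so that it gets pickled (and versioned) along with the model."""
    existing = getattr(model, "__metadata__", None) or {}
    if metadata:
        existing.update(metadata)
    model.__metadata__ = existing
    return existing


class Model:
    pass


model = Model()
annotate(model, {"version": 1})
```

Because the metadata lives on the model instance itself, any persister that pickles the model stores the annotations for free.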