François Chollet
With this approach, you should be able to write a custom training loop, right? Does that work?
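For reference, a minimal sketch of what such a loop could look like with the torch backend (the model, shapes, and hyperparameters are just illustrative):

```python
import os
os.environ["KERAS_BACKEND"] = "torch"  # must be set before importing keras

import torch
import keras

# Illustrative model; any keras.Model works the same way under the torch backend.
model = keras.Sequential([
    keras.Input(shape=(16,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(10),
])
loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True)
# With the torch backend, a Keras model exposes its weights via .parameters(),
# so a regular torch optimizer can drive the update.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(x, y):
    # Standard torch training step: forward, loss, backward, update.
    logits = model(x)
    loss = loss_fn(y, logits)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss

# Dummy batch just to show the call.
x = torch.randn(32, 16)
y = torch.randint(0, 10, (32,))
print(float(train_step(x, y)))
```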
One thing we could do is introduce a torch data iterator for `fit()` that moves the data to the default torch device.
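Roughly something like this (the wrapper name and device logic are illustrative, not an actual API):

```python
import torch

def device_iterator(dataloader, device=None):
    # Illustrative wrapper: yields each batch with its tensors moved
    # to the target torch device.
    if device is None:
        device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    for batch in dataloader:
        yield tuple(
            t.to(device) if isinstance(t, torch.Tensor) else t
            for t in batch
        )
```

Since `fit()` already accepts Python generators, something like `model.fit(device_iterator(loader), ...)` could then work without the user moving data manually.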
Thanks for the suggestion. It looks very doable. We'd have to wait and see whether it gains traction before adding it to the core API, though. Are you willing to work...
Thanks for the suggestions. Do these ops have any equivalent in JAX or in NumPy?
Try with `keras-nightly`; this error should already be fixed there.
We should disable jit_compile in auto mode when the backend is TF and the model contains an LSTM or GRU layer for which cuDNN is usable.
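In pseudocode, the auto-mode check could look roughly like this (a hypothetical helper, not the actual Keras internals):

```python
import keras

def should_disable_jit_in_auto_mode(model):
    # Hypothetical sketch of the check described above.
    if keras.backend.backend() != "tensorflow":
        return False
    for layer in model.layers:  # simplification: ignores nested sublayers
        if isinstance(layer, (keras.layers.LSTM, keras.layers.GRU)):
            # The fused cuDNN kernels for LSTM/GRU are not compatible with
            # XLA, so jit_compile should stay off whenever they would be
            # used (a full check would also verify the cuDNN constraints,
            # e.g. activations and masking).
            return True
    return False
```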
It looks like your dataset is yielding tensors that have no static shape (not even a known rank), which isn't supported by augmentation layers. Fix: make sure your dataset yields...
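If the dataset comes from `tf.data`, one way to pin the shapes down could look like this (the shapes below are just placeholders):

```python
import numpy as np
import tensorflow as tf

# Stand-in for the user's dataset; imagine the static shapes were lost upstream.
dataset = tf.data.Dataset.from_tensor_slices(
    (np.zeros((8, 224, 224, 3), "float32"), np.zeros((8,), "int32"))
)

def set_static_shape(image, label):
    # Declare the per-element shape so downstream augmentation layers
    # see tensors with a known rank.
    image = tf.ensure_shape(image, [224, 224, 3])
    return image, label

dataset = dataset.map(set_static_shape)
```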
Thanks for the suggestion! I think these wrappers do provide value to the community.

> I'd be happy to help (as a scikit-learn maintainer) if y'all decide to add them...
Ok, that's a good argument. If you'd like to open a PR, we'll review it.
Did you take a look at the data on local disk and check that everything looked good?