Results: 342 comments by François Chollet

A `.keras` model is not meant to be consumed by anything other than `keras.models.load_model()`. To use the TF-Lite converter, first create a TF SavedModel via `tf.saved_model.save(model)`.
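That export path could look roughly like this (a sketch assuming TensorFlow 2.x is installed; the toy model and the `/tmp` paths are illustrative, not part of the original comment):

```python
import tensorflow as tf

# A trivial Keras model, just for demonstration
model = tf.keras.Sequential([tf.keras.Input(shape=(4,)),
                             tf.keras.layers.Dense(1)])

# Step 1: export a TF SavedModel (not a .keras file)
tf.saved_model.save(model, "/tmp/my_saved_model")

# Step 2: point the TF-Lite converter at the SavedModel directory
converter = tf.lite.TFLiteConverter.from_saved_model("/tmp/my_saved_model")
tflite_bytes = converter.convert()

with open("/tmp/model.tflite", "wb") as f:
    f.write(tflite_bytes)
```

The key point is that the converter consumes the SavedModel directory, never the `.keras` archive.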

I'm told the TFLite team has gotten TFLite export working internally for Keras Core at Google, but the code has not yet been released. Hopefully you'll get an update soon.

Triage notes:
- Such an option should likely not be exposed to end users, so it doesn't need to be an argument.
- Some layers may require `jit_compile` in order...

Understood. How about:
- Add `DistributedEmbeddingOptimizer`, which takes a base optimizer but is aware of distributed embedding variables?
- In the base optimizer, add a line somewhere to error out...

> This could instead be handled with a per-variable "optimizer" rather than a special property. I like the generality; we can definitely add a generic variable attribute for this. Is...

IMO...
- No changes to the base trainer, and if possible the base variable.
- A custom optimizer can handle the differentiated update based on the kind of variable we have (could...
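The design being discussed can be sketched in plain Python. Everything here (`Variable`, the `custom_update` attribute, `CustomSGD`) is a hypothetical stand-in, not the actual Keras API; the point is only that the dispatch lives in the optimizer, with no trainer changes:

```python
class Variable:
    """Toy variable: an optional per-variable update hook
    (a generic attribute, as proposed above) replaces the default rule."""
    def __init__(self, value, custom_update=None):
        self.value = value
        self.custom_update = custom_update

class CustomSGD:
    """Toy optimizer: the differentiated update lives entirely here,
    so the base trainer needs no changes."""
    def __init__(self, lr=0.1):
        self.lr = lr

    def apply(self, grad, var):
        if var.custom_update is not None:
            # Route to the per-variable update rule.
            var.value = var.custom_update(var.value, grad, self.lr)
        else:
            # Default dense SGD update.
            var.value -= self.lr * grad
```

With this shape, a distributed-embedding variable just carries its own update rule, and ordinary variables fall through to the default path.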

Different backends have different ways of handling this, and there is no cross-backend solution in Keras 3 today. JAX has [`checkify`](https://jax.readthedocs.io/en/latest/debugging/checkify_guide.html), for instance. PyTorch will generally run eagerly so you...
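For reference, a minimal `checkify` sketch (assuming a recent JAX install; `safe_log` is just an illustrative function, not from the original comment):

```python
import jax.numpy as jnp
from jax.experimental import checkify

def safe_log(x):
    # checkify.check records a functional runtime assertion that
    # survives jit tracing, unlike a plain Python assert.
    checkify.check(jnp.all(x > 0), "log requires positive inputs")
    return jnp.log(x)

# The checkify transform makes the function return (error, value)
checked_log = checkify.checkify(safe_log)

err, out = checked_log(jnp.array([1.0, jnp.e]))
err.throw()  # no-op here, since the check passed
```

If the check fails, `err.throw()` raises with the recorded message instead of silently producing NaNs.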

What currently happens if you try to clone a model that contains shared embeddings?

> It clones like nothing happened and continues with the test. Is there a way to check if the layers still contain a shared embedding, or is it not even...
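One way to check whether sharing survives a clone is an object-identity count over the layer list. This is a pure-Python sketch (toy `Embedding` class and `count_unique` helper are illustrative, not the Keras clone logic):

```python
def count_unique(layers):
    """Count distinct layer objects by identity: a layer shared across
    several call sites appears in the list twice but is counted once."""
    return len({id(layer) for layer in layers})

class Embedding:
    """Toy stand-in for an embedding layer."""
    def __init__(self, dim):
        self.dim = dim

shared = Embedding(8)
model_layers = [shared, shared]      # one embedding used in two places
before = count_unique(model_layers)  # 1: sharing is present

# A naive clone rebuilds each entry, silently breaking the sharing
naive_clone = [Embedding(layer.dim) for layer in model_layers]
after = count_unique(naive_clone)    # 2: sharing was lost
```

In real Keras terms, comparing the number of distinct weight tensors before and after `clone_model` would surface the same silent loss of sharing.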

> In the source, there is also a PyDatasetEnqueuer class. Do I need this? Why is this here? Who is the target audience? Is the expectation of the Enqueuer in...