🗺️ Keras Development Roadmap
Here's an overview of the features we intend to work on in the near future.
Core Keras
Saving & export
- Implement saving support for sharded models (sharded weights files).
- Improve the model export story:
  - [Open for Contributions] Implement export to ONNX (from any backend; a sketch of one current workaround follows this list)
  - [Open for Contributions] Implement export to TensorRT (from any backend)
  - [Open for Contributions] Write a serving/export guide explaining the different serving options (SavedModel, ONNX, TensorRT, TorchServe, etc.)
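Until a first-party exporter lands, one workaround on the torch backend is to use torch's own ONNX exporter, since Keras 3 models are `torch.nn.Module` instances there. A hedged sketch only; the toy model and shapes are illustrative, and this path will not cover every model:

```python
import os

os.environ["KERAS_BACKEND"] = "torch"  # must be set before importing keras

import torch
import keras

# Toy model for illustration; any built Keras model should work in principle.
model = keras.Sequential([
    keras.layers.Input(shape=(4,)),
    keras.layers.Dense(2),
])

# On the torch backend, `model` is a torch.nn.Module, so torch's ONNX
# exporter can trace it with a sample input.
torch.onnx.export(model, torch.randn(1, 4), "model.onnx")
```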
Distribution
- Add PyTorch SPMD support in `keras.distribution`.
- Add TensorFlow SPMD (DTensor) support in `keras.distribution` (a sketch of the existing JAX-backed API follows this list).
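For reference, a minimal sketch of the `keras.distribution` API as it exists today on the JAX backend; the two items above would bring the same API to the PyTorch and TensorFlow backends:

```python
import os

os.environ["KERAS_BACKEND"] = "jax"  # keras.distribution currently requires JAX

import keras

# Replicate the model and shard each batch across all available devices
# (SPMD data parallelism).
devices = keras.distribution.list_devices()
keras.distribution.set_distribution(
    keras.distribution.DataParallel(devices=devices)
)

# Models created after this point are distributed automatically.
model = keras.Sequential([keras.layers.Dense(1)])
```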
Performance
- Enable PyTorch compilation on any Keras model.
  - Also make it possible to provide torch compilation arguments to Keras.
- [Open for Contributions] Add a profiler Callback. With any backend, this callback should be able to give you a report on the computation/time breakdown of your layers and your train step (a minimal sketch follows this list).
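For contributors, a minimal sketch of the shape such a callback could take, using plain wall-clock timing. It only reports whole-step times; the per-layer breakdown described above would need backend-specific profiler hooks. The name `StepTimer` is hypothetical:

```python
import time

import keras


class StepTimer(keras.callbacks.Callback):
    """Hypothetical sketch: reports mean wall-clock time per train step."""

    def on_train_begin(self, logs=None):
        self.step_times = []

    def on_train_batch_begin(self, batch, logs=None):
        self._start = time.perf_counter()

    def on_train_batch_end(self, batch, logs=None):
        self.step_times.append(time.perf_counter() - self._start)

    def on_train_end(self, logs=None):
        if self.step_times:
            mean_ms = 1000 * sum(self.step_times) / len(self.step_times)
            print(f"Mean train step time: {mean_ms:.2f} ms "
                  f"over {len(self.step_times)} steps")
```

It would plug in as usual: `model.fit(x, y, callbacks=[StepTimer()])`.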
Modeling
- Add a `tensor.at` operator (see the sketch after this list).
- Add sparse support to the PyTorch backend.
- Increase coverage of sparse support in `keras.ops`:
  - Reduction ops
  - Boolean ops
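The `tensor.at` operator presumably mirrors JAX's functional index-update syntax, the established pattern for out-of-place updates on immutable tensors. For comparison:

```python
import jax.numpy as jnp

x = jnp.zeros(3)
y = x.at[1].set(5.0)  # returns a new array; x is unchanged
z = y.at[1].add(1.0)  # functional equivalent of y[1] += 1.0
```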
Ecosystem
- [Open for Contributions] Finish implementing the MLX backend, in collaboration with Apple (see the `mlx` branch).
- Add an OpenVINO inference-only backend, in collaboration with Intel.
KerasHub
- Unify KerasCV and KerasNLP into a single Keras Models package.
- Does this mean that KerasCV and KerasNLP will be merged into one single package, like the TensorFlow Model Garden?
- KerasCV development is terribly slow and lacks resources. There has been no release since February (0.8.2), and many models don't have weights, e.g. ViT, MiT. The object detection models that were added (YOLOv8, RetinaNet, Faster R-CNN) are all broken and have no evaluation scripts to validate them. The only generative model is Stable Diffusion, while many very impactful models are available now.
- Like Hugging Face, why not hire engineers or researchers and reinforce this package? Check this repo: the owner can add any new model written in Keras, so why not the official Keras team? Is it because the API design is too rich and complicated for contributors? Not to mention, many reviews are left pending, and contributors leave in the end.
I concur with @pure-rgb's observation that the API design has become overly elaborate and intricate for contributors. Previously, it was straightforward and comprehensible, but with the introduction of the Backbone API, it has become more convoluted.
with the introduction of the Backbone API, it has become more convoluted.
The person who introduced that API design has already left Google. :p
@fchollet I've been going through the features you're planning to implement, and I'm particularly interested in contributing to KerasNLP. Specifically, I'm eager to get involved in the development of dynamic sequence length inference for PyTorch LLMs.
@kernel-loophole please open an issue on the KerasNLP repo if you'd like to contribute this feature!
okay thanks
I am a developer of TensorFlow Recommenders-Addons, and I now need to develop an all-to-all embedding layer for multi-GPU distributed training of recommendation models. The old TensorFlow distributed strategy clearly does not meet this need. So the question is: should I develop on TF DTensor or JAX? It seems that Keras support for TF DTensor is not friendly, but JAX lacks online inference services and the functional components used by various recommendation algorithms. Also, recommenders-addons has a lot of custom operators.
online inference services and the functional components used by various recommendation algorithms
@MoFHeka Can you elaborate on what you need here?
@jeffcarp If a third-party custom op (primitive) is used in JAX training, it is difficult to convert the model to a SavedModel for online inference; jax2tf is not easy to use (see the sketch below). A TF custom op, by contrast, only needs to be compiled into TF Serving or preloaded as a dynamic link library.
You may not be very concerned about how DTensor or JAX will evolve in the future, but for now a large number of recommendation models are trained with Keras, and I'd be interested to hear what you think of these two frameworks as Keras developers. After all, both frameworks have their own problems for us. Since one of our priorities is compatibility with Keras APIs, I would like to know whether Keras currently prefers to do more development for DTensor or for JAX, and which of their features it plans to integrate. Or which of the two is more worthy of support?
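For context, the jax2tf path under discussion looks roughly like the sketch below (the `predict` function, parameters, and shapes are illustrative). It works for pure JAX code, but a third-party custom primitive has no TF lowering, which is exactly the pain point raised above:

```python
import jax.numpy as jnp
import tensorflow as tf
from jax.experimental import jax2tf

# Illustrative pure-JAX predict function; a custom primitive used here is
# what breaks the conversion.
def predict(params, x):
    return jnp.dot(x, params["w"]) + params["b"]

params = {"w": jnp.ones((4, 2)), "b": jnp.zeros(2)}

# Wrap the converted function as a tf.function with a fixed input signature.
tf_predict = tf.function(
    jax2tf.convert(lambda x: predict(params, x)),
    input_signature=[tf.TensorSpec([None, 4], tf.float32)],
    autograph=False,
)

# Attach it to a tf.Module and write out a SavedModel for serving.
module = tf.Module()
module.predict = tf_predict
tf.saved_model.save(module, "/tmp/jax_recsys_model")
```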
@MoFHeka Keras is focusing on JAX for distributed training.
If a third-party custom op (primitive) is used in JAX training, it is difficult to convert the model to a SavedModel for online inference
Can you elaborate on what ops you need, and what your current workarounds are?
cc @hertschuh who is working on recommenders.
Thank you for your reply. Here is the TensorFlow Recommenders-Addons code that stores and trains dynamic-shape embedding tables with a fully functional hashtable. It's designed for training ID features without a static hash map: https://github.com/tensorflow/recommenders-addons/blob/master/tensorflow_recommenders_addons/dynamic_embedding/core/ops/hkv_hashtable_ops.cc
Dear all
It would make life so much easier if Keras 3 supported some of the newer optimizers, e.g. Shampoo or schedule-free Adam.
Any chance this could go on the roadmap?
Thanks!
Hey, I'd be interested in contributing this feature if the team allows me to!
Sure! Please take a crack at it and submit it for review when ready!
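For anyone picking this up, a minimal sketch of the Keras 3 optimizer subclassing pattern such a contribution would follow. The update rule below is plain SGD-with-momentum as a placeholder, not Shampoo or schedule-free Adam, and the class name is hypothetical:

```python
import keras


class MomentumSGD(keras.optimizers.Optimizer):
    """Placeholder optimizer showing the Keras 3 subclassing pattern."""

    def __init__(self, learning_rate=0.01, momentum=0.9,
                 name="momentum_sgd", **kwargs):
        super().__init__(learning_rate=learning_rate, name=name, **kwargs)
        self.momentum = momentum

    def build(self, variables):
        if self.built:
            return
        super().build(variables)
        # One slot variable per trainable variable, initialized to zeros.
        self._momentums = [
            self.add_variable_from_reference(v, "momentum") for v in variables
        ]

    def update_step(self, gradient, variable, learning_rate):
        # Shampoo / schedule-free math would replace these two lines.
        m = self._momentums[self._get_variable_index(variable)]
        self.assign(m, self.momentum * m - learning_rate * gradient)
        self.assign_add(variable, m)
```

It plugs in as usual: `model.compile(optimizer=MomentumSGD(1e-2), loss="mse")`.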