ONNX for Model saving
ONNX provides an open standard for saving models, portable between TensorFlow, PyTorch, etc.
Should this come into play in how Models are saved?
https://onnx.ai/
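To make the idea concrete, here's a minimal sketch of what the export step could look like (assuming PyTorch and torchvision are available; the resnet18 model and the file name are just stand-ins, not a proposal for what we'd actually ship):

```python
import torch
import torchvision.models as models

# Any torch.nn.Module works here; resnet18 is just a placeholder.
model = models.resnet18(weights=None)
model.eval()

# A dummy input defines the graph's input shape for tracing.
dummy_input = torch.randn(1, 3, 224, 224)

torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",               # the framework-agnostic artifact we'd store
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={"input": {0: "batch"}},  # allow variable batch size
)
```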
Perhaps we could provide a way of importing Models from various hubs, with every imported model saved in ONNX format. This would decouple where a Model was pulled from and which libraries the developer uses to work with it.
https://huggingface.co/docs/transformers/serialization
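For Hugging Face Hub models specifically, the optimum library can do the conversion at import time. A rough sketch (the model id and output directory are just examples, and this assumes optimum with the onnxruntime extra is installed):

```python
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer

model_id = "distilbert-base-uncased-finetuned-sst-2-english"

# export=True converts the PyTorch checkpoint to ONNX on load.
model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# save_pretrained writes model.onnx plus the config/tokenizer files.
model.save_pretrained("./sst2-onnx")
tokenizer.save_pretrained("./sst2-onnx")
```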
This would have implications for how Models are saved. Perhaps it makes sense to store models outside the container image (i.e. not bundled with the code).
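On the serving side, a sketch of what that decoupling could look like: the container ships only onnxruntime, and the model file is resolved from outside the image at startup (MODEL_PATH is a made-up convention here, not an existing one):

```python
import os

import numpy as np
import onnxruntime as ort

# Model file lives outside the image, e.g. a mounted volume or a
# path synced from object storage before the process starts.
model_path = os.environ.get("MODEL_PATH", "/models/model.onnx")
session = ort.InferenceSession(model_path)

input_name = session.get_inputs()[0].name
batch = np.random.randn(1, 3, 224, 224).astype(np.float32)

# run() returns a list of output arrays, one per declared output.
outputs = session.run(None, {input_name: batch})
print(outputs[0].shape)
```

Swapping models then becomes a storage operation rather than an image rebuild.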
I think this makes a lot of sense, especially once we have our own serving layer.
+1 - I didn't realize this emerging standard existed. Reading up a bit, it sounds like it makes for a better interop story when deploying.