logging/monitoring
MLEM takes care of the initial steps of deployment by handling packaging/building and serving the model. Once the model is deployed, the next step is monitoring it. See this simple monitoring solution for a nice example. Monitoring may include real-time logging, analysis (drift, shift, anomaly detection, etc.), alerting, and visualization.
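To make the "analysis" part concrete, here is a minimal sketch of one drift check: compare the mean of a live window of a feature against the mean of a reference (training) sample. This is not part of MLEM; `detect_drift` is a hypothetical helper using only the standard library, and real setups would use a proper statistical test or a monitoring library.

```python
import statistics

def detect_drift(reference, live, threshold=3.0):
    """Flag drift when the live window's mean deviates from the
    reference mean by more than `threshold` standard errors."""
    ref_mean = statistics.mean(reference)
    ref_std = statistics.stdev(reference)
    std_err = ref_std / len(live) ** 0.5  # standard error of the live mean
    return abs(statistics.mean(live) - ref_mean) > threshold * std_err

reference = [0.1 * i for i in range(100)]  # feature values seen in training
stable = reference[::5]                    # live window from the same range
shifted = [x + 5 for x in stable]          # live window shifted upward

print(detect_drift(reference, stable))   # False
print(detect_drift(reference, shifted))  # True
```

A real pipeline would run such checks continuously over sliding windows and feed the results into the alerting layer.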
There are lots of monitoring solutions, both ML-specific and generic. MLEM could enable something like mlem log/monitor to set up Grafana+Prometheus or another logging stack on top of an MLEM service or as part of an MLEM deployment.
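For context, what Prometheus actually needs from a service is just an HTTP endpoint that serves metrics in its plain-text exposition format. The sketch below fakes that with only the standard library; a real integration would use the official prometheus_client package, and the metric names here are invented for illustration.

```python
# Minimal /metrics endpoint in Prometheus exposition format, stdlib only.
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading
import urllib.request

metrics = {"model_predictions_total": 0}

def predict(features):
    """Stand-in for a model endpoint; bumps a counter on each call."""
    metrics["model_predictions_total"] += 1
    return sum(features)

class MetricsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = "".join(f"{k} {v}\n" for k, v in metrics.items()).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

predict([0.1, 0.2])

server = HTTPServer(("127.0.0.1", 0), MetricsHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]
text = urllib.request.urlopen(f"http://127.0.0.1:{port}/metrics").read().decode()
print(text)  # this is the page Prometheus would scrape
server.shutdown()
```

A hypothetical mlem monitor command could generate exactly this kind of wrapper around a served model, plus a Prometheus scrape config and a Grafana dashboard pointing at it.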
Note: I think it's out of scope for MLEM at the moment, but another aspect of monitoring is a persistent data store that saves all production data so it can be debugged, analyzed, and used to train new models in the future. There seem to be few standardized solutions for persisting production model data. Usually both the input data/features and the predictions need to be persisted, in a format that the DS team can easily consume, and it should be easy to link predictions back to the data used to generate them.
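The linkage requirement above can be satisfied with something as simple as append-only JSON-lines records that carry a shared prediction ID. This is only a sketch of the idea; `log_prediction` and the field names are made up, and a real store would be a file, database, or object store rather than an in-memory buffer.

```python
import datetime
import io
import json
import uuid

def log_prediction(store, features, prediction, model_version):
    """Persist one request/response pair; the ID links inputs to outputs."""
    record = {
        "prediction_id": str(uuid.uuid4()),
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "features": features,
        "prediction": prediction,
    }
    store.write(json.dumps(record) + "\n")  # one JSON object per line
    return record["prediction_id"]

store = io.StringIO()  # stand-in for an append-only production store
pid = log_prediction(store, {"age": 42, "income": 55000}, 0.87, "v3")

# The DS team can later load the line and trace the prediction to its inputs.
record = json.loads(store.getvalue().splitlines()[0])
print(record["prediction_id"] == pid)  # True
```

Tagging each record with the model version also makes it possible to attribute drift or quality regressions to a specific deployment.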