Feature Request: Relax volume constraint to support more volume types
In what area(s)?
/area API
Describe the feature
We currently only allow ConfigMap and Secret volumes to be mounted into the user container. This constraint is in place because volumes are a source of state and can severely limit scaling. This feature request is to relax that constraint and allow a larger set of volume types that work well with serverless functions to be mounted.
I do not have a particular set of volume types in mind, but #4130 may be a good example.
To support PVCs, how would data be handled when scaling apps up or down? Will Knative take care of that work, or should the app handle data migration gracefully on scale-up and scale-down?
Would it be possible to add emptyDir as an allowed type? It does not hold state, should not present a scaling problem, and provides read-write storage outside the Docker overlay filesystem. Combining emptyDir with medium: Memory would also let users create larger tmpfs mounts where write-intensive operations can happen in memory rather than on disk, which is orders of magnitude faster and avoids wearing down SSDs on self-hosted instances. The default size of /dev/shm is only 64M, and one of the suggested workarounds for increasing it is in fact an emptyDir with medium: Memory.
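For reference, a minimal plain-Kubernetes sketch of that /dev/shm workaround (image name and size limit are placeholders). Knative's validation would reject the emptyDir volume today; this only illustrates what the relaxed constraint would permit.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shm-example
spec:
  containers:
    - name: app
      image: example.com/my-app   # placeholder image
      volumeMounts:
        - name: dshm
          mountPath: /dev/shm     # replaces the default 64M shm mount
  volumes:
    - name: dshm
      emptyDir:
        medium: Memory            # tmpfs backed by node RAM
        sizeLimit: 1Gi            # optional cap on the tmpfs size
```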
I suppose that can be solved by writing a custom mutating admission webhook.
I have a very valid use case with Odoo, which saves all generated attachments on an NFS share when using multi-tenant deployments, and this is already running on an actual k8s deployment with Istio.
Knative could make things easier for us, but we can't drop the NFS (which isn't even a source of state for us). There should be some way to accomplish this. If it's not an issue with k8s, it shouldn't be a constraint when using Knative; that NFS share should not impact a Knative deployment at all.
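For context, a hedged sketch of what this looks like as a plain Kubernetes PodSpec (server address, export path, and image tag are placeholders); it is the nfs volume type here that Knative's validation does not currently accept.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: odoo-example
spec:
  containers:
    - name: odoo
      image: odoo:latest              # illustrative image/tag
      volumeMounts:
        - name: filestore
          mountPath: /var/lib/odoo    # where Odoo keeps generated attachments
  volumes:
    - name: filestore
      nfs:
        server: nfs.example.internal    # placeholder NFS server
        path: /exports/odoo-filestore   # placeholder export path
```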
@gustavovalverde Thanks for sharing your use-case. This is something that is on the radar of the API Working Group, but we do not have someone actively working on this right now.
The "Binding" pattern as talked about in https://docs.google.com/document/d/1t5WVrj2KQZ2u5s0LvIUtfHnSonBv5Vcv8Gl2k5NXrCQ/edit#heading=h.lnql658xmg9p could be a potential workaround to inject these into the deployment that Knative creates while we work on getting this issue resolved. See https://github.com/mattmoor/bindings for examples.
cc @mattmoor
@dgerd @mattmoor I'd really appreciate an example of how to use bindings for this use case. I'll test it and give feedback here so others with the same restriction can use this workaround.
@dgerd and I spent some time discussing this idea before the holidays began. I think he wanted to try to PoC it. If not, then he and I should probably write up a design doc to capture our thoughts/discussion.
@mattmoor Do I read this correctly that I cannot use ReadWriteMany PVCs at all in a Knative Service? I have a simple uploader service that needs to deposit data in an Azure Files PVC volume. I understand the desire for statelessness, but I don't see how this differs from inserting data into a database; the persistence isn't in the pod in either case. Thanks for any insight. --jg
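For reference, a sketch of the kind of claim involved: a ReadWriteMany PVC backed by an Azure Files storage class (the class name varies by cluster and is assumed here). It can be mounted by a plain Deployment, but a Knative Service spec referencing it is rejected by validation today.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: uploads
spec:
  accessModes:
    - ReadWriteMany              # Azure Files supports shared read-write access
  storageClassName: azurefile    # class name depends on the cluster setup
  resources:
    requests:
      storage: 10Gi
```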
I don't think we've figured out how to allow this in a way that doesn't have pointy edges that folks will stick themselves on. I totally agree that the filesystem is a useful abstraction for write-many capable devices.
Bumping this issue because it is something that most users I have met want to do.
> I don't think we've figured out how to allow this in a way that doesn't have pointy edges that folks will stick themselves on.
True, but realistically we will most likely never be able to prevent users from shooting themselves in the foot. We have seen them bypass the limitations with webhooks to inject sidecars, use the downward API, and mount volumes anyway.
The binding pattern is really interesting but maybe too complicated for typical users who just want to have the Kn Pod Spec be 100% compatible with the k8s Pod Spec.
As an example of what JR said above, both Datadog and New Relic use Unix domain sockets (UDS) to collect metrics, and exposing those is going to be important for supporting customers using these systems. In the case of Datadog, the predominant pattern is to deploy the agent as a DaemonSet to the cluster and have customers use UDS to send metrics to the agent local to the node. An alternative is to use the host IP from within the user code to send the metrics to the DaemonSet, but to ensure that metrics go to the host node and not a random node in the cluster, the user has to use the k8s downward API to feed the host IP to the revision, and that doesn't work either because we don't support the k8s downward API.
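Both patterns above are plain Kubernetes; a hedged sketch of what they look like in a PodSpec follows (socket path and env var names follow Datadog's documented defaults, but treat them as illustrative). Knative currently rejects both the hostPath volume and the fieldRef downward API source.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: metrics-example
spec:
  containers:
    - name: app
      image: example.com/my-app         # placeholder image
      env:
        # Option 1: talk to the node-local agent over UDS
        - name: DD_DOGSTATSD_SOCKET
          value: /var/run/datadog/dsd.socket
        # Option 2: talk to the agent DaemonSet via the node IP
        - name: DD_AGENT_HOST
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP  # downward API, also rejected today
      volumeMounts:
        - name: dsdsocket
          mountPath: /var/run/datadog
          readOnly: true
  volumes:
    - name: dsdsocket
      hostPath:
        path: /var/run/datadog          # directory where the agent exposes its socket
        type: Directory
```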
Would love to get everyone's opinion on two things:

- Can we extend the current list and support hostPath? While this could potentially have pointy edges, the lack of it is going to be an adoption blocker for a large set of scenarios, especially ones that involve DaemonSets (very common in logging and monitoring scenarios).
- Can we build an extension point here and allow vendors to extend this default list with vendor-specific additions? That way, Knative can still focus on a set of core scenarios and vendors will be responsible for supporting and maintaining their additions to the list.
> True, but realistically we will most likely never be able to prevent users from shooting themselves in the foot. We have seen them bypass the limitations with webhooks to inject sidecars, use the downward API, and mount volumes anyway.
Yep, I agree. I think my prior comment is likely easily misinterpreted as "No, we need to solve this problem", but my intent was simply to convey that this isn't a slam dunk, there are downsides/gotchas that we'll have to be sure to clearly document.
> The binding pattern is really interesting but maybe too complicated for typical users who just want to have the Kn Pod Spec be 100% compatible with the k8s Pod Spec.
The position I've been advocating for is actually to expand the surface of PodSpec that we allow so that the binding pattern can target Service (as the subject), vs. forcing folks to reach around and use it with our Deployments. Sure, it can be used to reach around us, but I agree that here it is inappropriate and overkill.
> Can we extend the current list
I think we should absolutely expand the list, I have mixed feelings on hostPath (aka privilege), but we should discuss on a WG call. Especially with multiple container support coming the filesystem becomes an extremely interesting channel for intra-pod communication. The Google Cloud SQL proxy comes to mind 😉
I think at this point what we need is someone to drive the feature by putting together the appropriate feature track documentation and running it through the process.
Issues go stale after 90 days of inactivity.
Mark the issue as fresh by adding the comment /remove-lifecycle stale.
Stale issues rot after an additional 30 days of inactivity and eventually close.
If this issue is safe to close now please do so by adding the comment /close.
Send feedback to Knative Productivity Slack channel or file an issue in knative/test-infra.
/lifecycle stale
I think we still want this.
/remove-lifecycle stale
Yes, this could be behind a feature flag. I'll take a look after I add support for Downward API.
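A hedged sketch of what a flag-guarded setup could look like, modeled on Knative's config-features ConfigMap; the flag names below are illustrative placeholders, not a committed API, and would be defined by the feature track.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-features
  namespace: knative-serving
data:
  # Illustrative flag names; disabled by default, opt-in per cluster.
  kubernetes.podspec-fieldref: "enabled"          # allow downward API fieldRef env sources
  kubernetes.podspec-volumes-emptydir: "enabled"  # allow emptyDir volumes
```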
Hi, is there a workaround for this or is it a WIP?
A workaround is to use a Webhook to inject what you want in the Pod Spec. Not ideal. This is a WIP, but I don't think anyone is working on it right now.
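For anyone exploring that route, a minimal sketch of the registration side of such a webhook, intercepting the Deployments Knative creates so a backing service can patch in the extra volumes. Service name, namespace, path, and webhook name are placeholders, and the webhook server that computes the JSONPatch is not shown.

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: add-volumes.example.com        # placeholder name
webhooks:
  - name: add-volumes.example.com
    admissionReviewVersions: ["v1"]
    sideEffects: None
    clientConfig:
      service:
        name: volume-injector          # placeholder webhook Service
        namespace: default
        path: /mutate
      # caBundle: <base64-encoded CA>  # required so the API server trusts the webhook
    rules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["deployments"]
    # Only touch Deployments created for Knative Revisions
    objectSelector:
      matchExpressions:
        - key: serving.knative.dev/revision
          operator: Exists
```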
@JRBANCEL I could have a look at this.
> @JRBANCEL I could have a look at this.
Great. You can look at the various features behind feature flags for inspiration, for example: https://github.com/knative/serving/pull/8126
Thanks @JRBANCEL, this probably needs an official design document/proposal. I will work on it.
/assign
This issue is stale because it has been open for 90 days with no activity. It will automatically close after 30 more days of inactivity. Reopen the issue with /reopen. Mark the issue as fresh by adding the comment /remove-lifecycle stale.
It would be good to either document the principles here (e.g. avoid state storage and sharing between Pods, as it's against the stateless design and tends to lead to awkward failure and scaling modes), and/or to make this a flag-guarded "defaults are safe, but you can unlock the hood and reach into the running engine if you must" list.
/triage accepted
I have a use case where we're looking to use Knative to facilitate autoscaling of machine learning services that load large artifacts on demand. To illustrate, services that look something like the TensorFlow embedding projector https://projector.tensorflow.org/ with a large embedding preloaded.
The k8s pod spec pattern we are currently using is an initContainer that copies artifacts from a PVC into an emptyDir for the main container to use. This allows relatively fast loading of these large (~1 GB) artifacts compared to, for example, downloading them from S3 on every startup.
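A sketch of that plain-Kubernetes pattern (names, images, and paths are placeholders); this is what currently cannot be expressed in a Knative Service spec, since both the PVC and the emptyDir volume are rejected by validation.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: embedding-server
spec:
  initContainers:
    - name: fetch-artifacts
      image: busybox:1.36                  # any image with cp would do
      command: ["sh", "-c", "cp -r /artifacts/. /models/"]
      volumeMounts:
        - name: artifact-store
          mountPath: /artifacts
          readOnly: true
        - name: models
          mountPath: /models
  containers:
    - name: server
      image: example.com/embedding-server  # placeholder serving image
      volumeMounts:
        - name: models
          mountPath: /models               # main container reads the copied artifacts
  volumes:
    - name: artifact-store
      persistentVolumeClaim:
        claimName: model-artifacts         # placeholder PVC name
    - name: models
      emptyDir: {}                         # node-local scratch space for the copy
```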
I was hoping to use Knative to allow pod autoscaling of these services, as various expensive machine types (e.g. GPUs, high-RAM instances) are required, and having an instance of the service running for every combination of the artifacts is unfeasible.
Is an artifact loading + autoscaling use case like this out of scope for Knative?
Also, are there any further resources for the suggested workarounds? The Google Doc here is private.
The "Binding" pattern as talked about in https://docs.google.com/document/d/1t5WVrj2KQZ2u5s0LvIUtfHnSonBv5Vcv8Gl2k5NXrCQ/edit#heading=h.lnql658xmg9p could be a potential workaround to inject these into the deployment that Knative creates while we work on getting this issue resolved. See https://github.com/mattmoor/bindings for examples.
> A workaround is to use a Webhook to inject what you want in the Pod Spec. Not ideal. This is a WIP, but I don't think anyone is working on it right now.
Edit: Is there an example for this pattern?
https://knative.tips/pod-config/volumes/ suggests a workaround for using other storage volumes: write native Kubernetes apps that mount such volumes, and call them from your Knative apps.
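A rough sketch of that split (all names and the image are placeholders): a plain Deployment owns the volume mount and exposes the data over HTTP through a regular Service, and the Knative Service calls it at its cluster-internal address instead of mounting the volume itself.

```yaml
# Plain Kubernetes Deployment that owns the volume mount
apiVersion: apps/v1
kind: Deployment
metadata:
  name: storage-api
spec:
  replicas: 1
  selector:
    matchLabels: {app: storage-api}
  template:
    metadata:
      labels: {app: storage-api}
    spec:
      containers:
        - name: api
          image: example.com/storage-api   # placeholder: serves the files over HTTP
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: shared-data         # placeholder PVC
---
apiVersion: v1
kind: Service
metadata:
  name: storage-api
spec:
  selector: {app: storage-api}
  ports:
    - port: 80
      targetPort: 8080
# The Knative app then calls http://storage-api.<namespace>.svc.cluster.local
# rather than mounting the volume directly.
```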
Similar to @yovizzle, I have a use case where we would like to use an init container to download static (but changing over time) files to a Pod on deployment, and emptyDir would be perfect for this. Otherwise I would also appreciate some documentation for how to use the mentioned workarounds.
hi @7adietri - just out of interest to understand the use case fully, what are the primary reasons to want to do this with an initContainer and emptyDir rather than having the main user container download the files on startup before responding to requests?
@julz Separation of concerns, mostly. The init container is using a cloud provider image and runs the provider's tool for downloading files from a bucket. The service/main container doesn't need to know about any of this, the files are "just there".
I have a concern around lifetimes and downloading content at init -- if the content changes, you could end up with a mix of content for an unknown duration as serving uses a mix of old and new Pods to handle requests until scaled down.
If there was a way to run the cloud provider image as a continuous sidecar, that would mitigate a lot of my concerns (rollback would still be harder, because there would be two different places to look).
@evankanderson In our case the URL changes with each content update and is part of the deployment manifest, so each content change causes a new deployment. Using different versions of the files until all Pods have been replaced is fine for us, and would probably be the same if they were continuously downloaded into running Pods.
I have the same approach as @yovizzle. I am trying to keep machine learning model weights in a separate container, outside of the serving container. Using initContainers, I copy the new model weights from the weights container into the unchanged serving container at startup through emptyDir volume mounts. In order to use Knative without emptyDir, I would need to push my container with several gigabytes of weights to the registry as a single container.
We also need emptyDir support in order to make use of Knative. We're doing transformations of huge data volumes (potentially several GBs) where the intermediate results are stored in an embedded/local H2 database and retrieved via SQL. Above a certain H2 cache size these results are stored on disk.
We want to introduce autoscaling via Knative, and our application seems to fit the requirements: the transformations are stateless and independent from each other, and the cache is no longer used after a transformation finishes. From my point of view emptyDir volumes look consistent with the Knative approach; I really hope they'll get implemented soon!