Controlling termination time for warm containers.
I'm using standalone OpenWhisk with functions running inside containers. I created actions in Python following https://github.com/apache/openwhisk/blob/master/docs/actions-python.md

When an action is invoked for the first time, the container starts cold and then keeps running for some time (around 10 minutes) without exiting. If another request arrives within this window, it is served by the warm container. How can I make the container exit immediately after serving each request, i.e. how can I force a cold start for every single request?

Additionally, I tried using the ttl field in runtimes.json for Python to control when the warm container terminates, but it doesn't seem to work: TTLs of 5 minutes and 2 minutes make no difference to when the warm container terminates.
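For anyone wanting to confirm whether an invocation hit a cold or warm container, module-level state in the action survives warm reuse. A minimal sketch (the action name and return key are just illustrative):

```python
# hello.py — OpenWhisk Python action; module-level state survives warm reuse
count = 0  # reinitialized only when a new (cold) container starts


def main(args):
    global count
    count += 1
    # count == 1 on a cold start; > 1 when the same warm container is reused
    return {"invocations_in_this_container": count}
```

Invoking the action twice in quick succession should return 1 and then 2 if the second request was served warm; a counter stuck at 1 means every request got a fresh container.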
Environment details:
- local deployment
- Ubuntu 18.04
This would likely need to be a new feature - “run and done” actions. Would not be hard to implement.
You can also try changing the idle container timeout - I don't know off hand if 0 minutes will work but you can try and let us know.
container-proxy {
timeouts {
# The "unusedTimeout" in the ContainerProxy,
#aka 'How long should a container sit idle until we kill it?'
idle-container = 10 minutes
pause-grace = 150 milliseconds
}
}
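For reference, in the full application.conf these timeouts live under the `whisk` prefix, so an override file would look something like this (a sketch; double-check the exact path against your OpenWhisk version's reference.conf):

```hocon
whisk {
  container-proxy {
    timeouts {
      # how long an idle container is kept before being destroyed
      idle-container = 1 minute
      pause-grace = 150 milliseconds
    }
  }
}
```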
sure, will try that and let you know. Thanks
Tried changing the idle container timeout to 1 minute and it works. But changing it to 0 minutes didn't work; the value was reset to 1 minute. I also tried 10 seconds, and even that gets reset to 1 minute.
Are you interested in implementing a run-and-done feature? I think it's compelling and would be happy to help you if you need help pursuing the feature.
I'd love to implement it. Thanks
Hi everyone, although I do not have a solution, do you mind answering some of my questions?
What are the rules of thumb for choosing a TTL for a container? And for multiple function instances to use the same container, do they have to come from the same function, or can any function instance use any available warm container?
@tiendatngcs Each action spawns its own containers, which are not shared with the others.
There are no rules of thumb regarding TTL. It depends on your workload and environment. It's a tradeoff.
You may find this paper useful. It categorizes production workloads in Azure and tries to minimize cold starts by optimizing configurations.