Block deployment if currently deploying
If a deploy is already in flight for a given service, we should prevent subsequent deploys of that service.
We could optionally allow a force parameter to cancel and clean up a currently in-flight deploy, then proceed with the new deploy request.
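A minimal sketch of that guard, assuming an in-process map of in-flight services (all names here are illustrative, not deployster's actual API):

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

var (
	mu       sync.Mutex
	inFlight = map[string]bool{} // service name -> deploy in progress?
)

var ErrDeployInProgress = errors.New("deploy already in progress")

// beginDeploy marks a service as deploying. If a deploy is already in
// flight it fails, unless force is set, in which case the caller would
// be expected to cancel and clean up the old deploy first.
func beginDeploy(service string, force bool) error {
	mu.Lock()
	defer mu.Unlock()
	if inFlight[service] && !force {
		return ErrDeployInProgress
	}
	inFlight[service] = true
	return nil
}

// finishDeploy clears the in-flight flag once a deploy completes.
func finishDeploy(service string) {
	mu.Lock()
	defer mu.Unlock()
	delete(inFlight, service)
}

func main() {
	fmt.Println(beginDeploy("api", false)) // <nil>
	fmt.Println(beginDeploy("api", false)) // deploy already in progress
	fmt.Println(beginDeploy("api", true))  // <nil> (forced)
	finishDeploy("api")
	fmt.Println(beginDeploy("api", false)) // <nil>
}
```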
Thought: We could run a worker which processes deploys, and enqueue deploys sequentially.
If we do this within the process itself, that could work. Is that what you had in mind?
The pro is that we'd keep our list of dependencies low. The con is that restarts of deployster would cause that queue to empty (or we might need some complexity so that we could disable deploys, drain the queue, and deploy the new version).
Also, would we want to do all deploys sequentially or deploys for a given service sequentially? I haven't thought through the implications of each yet.
On Mar 22, 2015, at 11:21 AM, Ivan Vanderbyl [email protected] wrote:
Thought: We could run a worker which processes deploys, and enqueue deploys sequentially.
Okay, so the level of complexity is obviously going to go up if we break that out into a separate service; we'd need a persistence layer, Redis perhaps, to offer some level of temporary persistence. We could maybe use etcd for persistence, but I'm not sure how that would look. I assume it supports collections?
Re: deploys sequentially for a service, I think this makes sense. It should be left up to the client to schedule dependent services.
Would it make sense to use dbus for this? Just throwing it out there because it's already available on CoreOS. Does that make sense, or would it be a weird use? I need to read up more.
Isn't dbus only single node? Could be a problem if the deploy process is on another node, or did you mean use dbus to also launch the deploy process?
AFAIK, dbus isn't distributed across the cluster, so whatever was enqueueing and processing would have to be on the same node (or in the same process/container).
At first glance, I like the idea of using etcd, but it might be stretching it to do something it shouldn't be doing.
Are you against the idea of having the worker in the same process for now? I am mindful of the deployster service doing too much, but I also like the simplicity of setting up deployster right now.
I'm not opposed to it; it would be easy to implement as a goroutine, and Go has nice primitives for blocking deploys on a particular channel. Given that it's not CPU- or memory-bound, that could work nicely.
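To make the goroutine idea concrete, here's a rough sketch of per-service sequential queues: one worker goroutine drains a channel per service, so deploys for a given service run in order while different services proceed independently. The types and names are hypothetical:

```go
package main

import (
	"fmt"
	"sync"
)

type Deploy struct {
	Service string
	Version string
}

type Queue struct {
	mu     sync.Mutex
	queues map[string]chan Deploy
	wg     sync.WaitGroup
	run    func(Deploy) // the actual deploy step
}

func NewQueue(run func(Deploy)) *Queue {
	return &Queue{queues: map[string]chan Deploy{}, run: run}
}

// Enqueue puts a deploy on its service's channel, starting a worker
// goroutine for that service on first use.
func (q *Queue) Enqueue(d Deploy) {
	q.mu.Lock()
	ch, ok := q.queues[d.Service]
	if !ok {
		ch = make(chan Deploy, 16)
		q.queues[d.Service] = ch
		go func() {
			for d := range ch {
				q.run(d)
				q.wg.Done()
			}
		}()
	}
	q.mu.Unlock()
	q.wg.Add(1)
	ch <- d
}

// Drain waits for all enqueued deploys to finish — e.g. before a
// deployster restart, addressing the queue-emptying concern above.
func (q *Queue) Drain() { q.wg.Wait() }

func main() {
	q := NewQueue(func(d Deploy) { fmt.Println("deploying", d.Service, d.Version) })
	q.Enqueue(Deploy{"api", "v1"})
	q.Enqueue(Deploy{"api", "v2"}) // runs only after api v1 finishes
	q.Enqueue(Deploy{"web", "v5"}) // independent of api's queue
	q.Drain()
}
```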
:+1: When we get closer to implementing this, we can discuss some other options for how to persist jobs. It might be as easy as linking in a data dir and using sqlite or some kind of serializable data structure.