kompose
Support for Job objects
Feature request to support Job objects
Interesting idea. I haven't played with Kubernetes jobs yet. How would Job map to docker-compose?
I was under the incorrect assumption that values outside of docker-compose were flags, but after closer inspection it looks like the only flag is `--replicas`.
The values in the Job spec that are outside the realm of docker-compose would be `completions`, `parallelism`, and `activeDeadlineSeconds`.
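For reference, a minimal (purely illustrative) Job manifest showing where those three fields live might look like this; the names and values are made up for the example:

```yaml
# Illustrative Job manifest; name, image, and values are hypothetical.
apiVersion: batch/v1
kind: Job
metadata:
  name: example-job
spec:
  completions: 5              # require 5 successful pod completions
  parallelism: 2              # run at most 2 pods at a time
  activeDeadlineSeconds: 600  # kill the job if it runs longer than 10 minutes
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "echo done"]
```

None of these three spec fields has a docker-compose counterpart, which is the gap being discussed.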
I'll close this if you think it's out of scope for kompose.
Let's keep this open. We can maybe do something with Jobs in the future.
@kadel I think this can be done using docker-compose labels to define the service type: if a user defines the label `kompose.service.type: job` and no ports are specified, then we create a Job controller type rather than a Service and Deployment? WDYT?
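A sketch of what that could look like on the compose side; note that the `job` value for `kompose.service.type` is only what this issue proposes, not something kompose supports:

```yaml
# Hypothetical docker-compose.yml; "job" as a kompose.service.type value
# is proposed in this issue, not an existing kompose feature.
version: "2"
services:
  migrate:
    image: example/db-migrate
    labels:
      kompose.service.type: job
    # no ports: nothing to expose, so no Service object would be generated
```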
Inspired by: https://github.com/redhat-developer/opencompose/issues/50
Currently we have a couple of choices for preferences: kompose-specific labels, flags, and a preference file. We need to structure them correctly and make sure they don't overlap.
Preference file: we see it as a way to define profiles, including cluster info, default objects, and probably user authentication (if we don't intend to rely on kubectl anymore). IMO it should only define high-level properties. Somehow we should support a `kompose config profile` command.
Flags: for declarations at runtime, like output format (chart/yml/json), additional objects (in this case the Job object should be declared here), and output location. But IMO we should not define too many flags.
Kompose-specific labels: we are using them for post-deploy actions, but I think they are really powerful and they make kompose more specific. Let's discuss more to define their scope.
So for creating jobs we can use docker-compose's `restart` for this, it's just we define some constraints like:

- default value of `restart` is `always`: where we create normal `deployments` or `deploymentconfigs` and services
- if `restart` is `on-failure` we create a `job` which restarts on failure
- if `restart` is `no` we create a `job` which does not restart on failure

e.g. about docker-compose restart: https://github.com/docker/compose/blob/50faddb683d76567d64c8ef7dd1a09358f68a779/tests/fixtures/restart/docker-compose.yml
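For context, the restart values involved look like this in a compose file (the mapping comments reflect the proposal above, not current kompose behavior; the service names are made up):

```yaml
# docker-compose restart values relevant to the proposed mapping.
version: "2"
services:
  web:
    image: busybox
    restart: always        # proposal: normal Deployment/DeploymentConfig + Service
  batch:
    image: busybox
    restart: on-failure    # proposal: Job that retries on failure
  oneshot:
    image: busybox
    restart: "no"          # proposal: Job that does not retry
```

Note that `"no"` must be quoted in YAML, otherwise it is parsed as the boolean `false`.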
> default value of restart is always

From what I've seen, the default value of `restart` seems to be `no`, i.e. the service is not restarted on exit (success/failure).
@jpopelka, yes `restart` is `no` by default for docker-compose, I was suggesting how we go about interpreting this in kompose!
Does docker-compose really have semantics for batch jobs? I don't think we should try to "hack" k8s semantics on top of docker-compose ones. The `restart` option is really about whether the pods get restarted by the kubelet or not.
@sebgoa there is no direct way to define batch jobs in docker-compose. But you can get the behavior of a job if you define a docker-compose service with `restart: "no"`; it will act as a job.
> the restart is really about the pods being restarted by the kubelet or not.

I didn't understand what you are trying to say here?
`restart: "no"` in a docker-compose file is meant to tell docker not to restart the containers if they fail. In k8s, a Pod has a `restartPolicy`. This is the clear one-to-one match.
If we start using a docker-compose semantic to do something in k8s that is not a one-to-one match, it will get confusing really quickly.
So my question is: "how do you do batch processing in Docker swarm?"
> If we start using a docker-compose semantic to do something in k8s that is not a one-to-one match, it will get confusing really quickly.

+1
> So for creating jobs we can use docker-compose's `restart` for this, it's just we define some constraints like:
>
> - default value of `restart` is `always`: where we create normal `deployments` or `deploymentconfigs` and services
> - if `restart` is `on-failure` we create a `job` which restarts on failure
> - if `restart` is `no` we create a `job` which does not restart on failure
>
> e.g. about docker-compose restart: https://github.com/docker/compose/blob/50faddb683d76567d64c8ef7dd1a09358f68a779/tests/fixtures/restart/docker-compose.yml
`restart` from docker-compose should map to `restartPolicy` in the PodSpec.
I think the confusion between the above discussions comes down to `restartPolicy` vs `kubectl run`.

For cluster version >= 1.3:
- `Always` creates a Deployment
- `OnFailure` creates a Job
- `Never` creates a Pod

From http://kubernetes.io/docs/user-guide/kubectl/kubectl_run/:

> --generator string  The name of the API generator to use. Default is 'deployment/v1beta1' if --restart=Always, 'job/v1' for OnFailure and 'run-pod/v1' for Never. This will happen only for cluster version at least 1.3, for 1.2 we will fallback to 'deployment/v1beta1' for --restart=Always, 'job/v1' for others, for olders we will fallback to 'run/v1' for --restart=Always, 'run-pod/v1' for others.
Okay, so what I understand here is that we map docker-compose's `restart` to the PodSpec's `restartPolicy`, but that's done already in code.
> - default value of `restart` is `always`: where we create normal `deployments` or `deploymentconfigs` and services
> - if `restart` is `on-failure` we create a `job` which restarts on failure
> - if `restart` is `no` we create a `job` which does not restart on failure
@surajssd In `kubectl run`, we share the same pattern (see the `--restart` flag), except that when `restart` is `no`, a `pod` will be created instead of a `job`.
There is also another thing that we should be aware of. We are assuming a different default value for `restart` than docker-compose does.
If you don't specify `restart`, docker uses `no` as the default.
But kompose is creating controller objects (Deployments, ReplicaSets, ...) and they all have `restartPolicy: Always` (the podSpec in those objects can't have a different restartPolicy).
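To make that constraint concrete: in a Deployment's pod template, `restartPolicy` must be `Always` (it is also the default), so the only valid shape is roughly this sketch (names are illustrative):

```yaml
# Illustrative Deployment; the pod template's restartPolicy must be Always.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: example
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: example
    spec:
      restartPolicy: Always   # OnFailure/Never are rejected for Deployments
      containers:
      - name: app
        image: busybox
```

The API server rejects a Deployment whose pod template sets `restartPolicy` to anything else, which is why `restart: "no"` or `on-failure` can't simply be passed through to a controller object.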
So what I understand from @janetkuo's and @surajssd's comments, there is a proposal to handle restart values like this:

| docker-compose restart value | object created by kompose | notes |
|---|---|---|
| no restart specified | controller object | |
| `always` | controller object | |
| `no` | Pod | |
| `on-failure` | Job | can have a max-retries argument: `on-failure[:max-retries]` |
| `unless-stopped` | controller object | |

Docker documentation regarding restart values: https://docs.docker.com/engine/reference/run/#restart-policies---restart (this is the same for docker-compose)
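Putting the table together, a hedged sketch of the `on-failure` row: a compose service with a max-retries count could plausibly be converted to a Job whose retry count maps onto the Job's `backoffLimit` field (that field was added to Kubernetes after this discussion, so the mapping is an assumption, not kompose's behavior):

```yaml
# Input (docker-compose), per the on-failure[:max-retries] syntax above:
#   services:
#     worker:
#       image: example/worker
#       restart: on-failure:3
#
# Possible output sketch (Kubernetes Job); field mapping is hypothetical:
apiVersion: batch/v1
kind: Job
metadata:
  name: worker
spec:
  backoffLimit: 3           # from on-failure:3 (max retries before giving up)
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: worker
        image: example/worker
```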
ok fine with me, but Pods can have restart policies as well, and Jobs are much more than just Pods that restart.
Small modification to this: let's start implementing this, but when `restart: on-failure`, convert to a Pod with `restartPolicy: OnFailure`.
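Under that modification, a compose service with `restart: on-failure` would come out as a bare Pod roughly like this (a sketch, not kompose's actual output; names are illustrative):

```yaml
# Illustrative Pod; restartPolicy: OnFailure means the kubelet restarts
# the container in place on failure, with no controller object involved.
apiVersion: v1
kind: Pod
metadata:
  name: worker
spec:
  restartPolicy: OnFailure
  containers:
  - name: worker
    image: example/worker
```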
Issues go stale after 90d of inactivity.
Mark the issue as fresh with `/remove-lifecycle stale`.
Stale issues rot after an additional 30d of inactivity and eventually close.
Prevent issues from auto-closing with an `/lifecycle frozen` comment.
If this issue is safe to close now please do so with `/close`.
Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale
/lifecycle frozen
Six-year-old issue with no further discussion on whether we should implement this or not. Going to close, sorry!