
Support for Job objects

Open sstarcher opened this issue 8 years ago • 22 comments

Feature request to support Job objects

sstarcher avatar Oct 25 '16 19:10 sstarcher

Interesting idea. I haven't played with Kubernetes jobs yet. How would Job map to docker-compose?

kadel avatar Oct 26 '16 09:10 kadel

I was under the incorrect assumption that values that were outside of docker-compose were flags, but after closer inspection it looks like the only flag is --replicas.

The values in the Job spec that are outside of the realm of docker-compose would be completions, parallelism, and activeDeadlineSeconds.

I'll close this if you think it's out of scope for kompose.

sstarcher avatar Oct 26 '16 10:10 sstarcher

Let's keep this open. We can maybe do something with Jobs in the future.

kadel avatar Oct 26 '16 11:10 kadel

@kadel I think this can be done using docker-compose labels to define the service type: if a user defines the label kompose.service.type: job and no ports are specified, then we create a Job controller type rather than a service and deployment? WDYT?

Inspired by: https://github.com/redhat-developer/opencompose/issues/50
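As a rough sketch of that label check (hypothetical helper, not kompose's actual label handling):

```python
# Hypothetical sketch of the label check proposed above; kompose's
# actual label handling may differ.
def wants_job(service: dict) -> bool:
    """True if a compose service asks for a Job via a kompose label."""
    labels = service.get("labels", {}) or {}
    return labels.get("kompose.service.type") == "job" and not service.get("ports")

print(wants_job({"image": "busybox",
                 "labels": {"kompose.service.type": "job"}}))  # True
```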

surajssd avatar Nov 07 '16 07:11 surajssd

Currently we have a couple of choices for preferences: kompose-specific labels, flags, and a preference file. We need to structure them correctly and make sure they don't overlap.

Preference file: we see it as a way to define profiles, including cluster info, default objects, and probably user authentication (if we don't intend to rely on kubectl anymore). IMO it should only define high-level properties. Somehow we should support a 'kompose config profile' command.

Flags: for declarations at runtime, like output format (chart/yml/json), additional objects (in this case the Job object should be declared here), and output location. But IMO we should not define too many flags.

Kompose-specific labels: we are using them for post-deploy actions, but I think they're really powerful and they make kompose more specific. Let's discuss more to define their scope.


ngtuna avatar Nov 07 '16 09:11 ngtuna

So for creating Jobs we can use docker-compose's restart; we just define some constraints like:

  • default value of restart is always: we create normal deployments or deploymentconfigs and services
  • if restart is on-failure, we create a Job which restarts on failure
  • if restart is no, we create a Job which does not restart on failure

e.g. about docker-compose restart https://github.com/docker/compose/blob/50faddb683d76567d64c8ef7dd1a09358f68a779/tests/fixtures/restart/docker-compose.yml
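The mapping proposed above could be sketched roughly like this (a hypothetical helper, not kompose's actual code; the object names are illustrative):

```python
# Hypothetical sketch of the restart mapping proposed above;
# not kompose's actual code, object names are illustrative.
def workload(service: dict):
    """Map a compose service's restart value to (object kind, restartPolicy)."""
    restart = service.get("restart", "always")  # the proposal treats "always" as default
    if restart == "always":
        return ("Deployment", "Always")   # normal controller object plus a Service
    if restart == "on-failure":
        return ("Job", "OnFailure")       # Job that restarts on failure
    if restart == "no":
        return ("Job", "Never")           # Job that does not restart on failure
    raise ValueError(f"unhandled restart value: {restart}")

print(workload({"image": "busybox", "restart": "on-failure"}))  # ('Job', 'OnFailure')
```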

surajssd avatar Nov 17 '16 08:11 surajssd

default value of restart is always

From what I've seen the default value of restart seems to be no, i.e. the service is not restarted on exit (success / failure).

jpopelka avatar Nov 21 '16 12:11 jpopelka

@jpopelka, yes restart is no by default for docker-compose, I was suggesting how we go about interpreting this in kompose!

surajssd avatar Nov 21 '16 13:11 surajssd

Does docker-compose really have semantics for batch jobs? I don't think we should try to "hack" k8s semantics on top of docker-compose ones. The restart is really about whether the pods are restarted by the kubelet or not.

sebgoa avatar Nov 21 '16 13:11 sebgoa

@sebgoa there is no direct way to define batch jobs in docker-compose. But you can get the behavior of a job if you define a docker-compose service with restart: "no"; it will act as a job.

the restart is really about the pods being restarted by the kubelet or not.

I didn't understand what you are trying to say here?

surajssd avatar Nov 21 '16 14:11 surajssd

restart: "no" in a docker-compose is meant to tell docker to not restart the containers if they fail.

In k8s, a Pod has a restartPolicy. This is the clear one to one match.

If we start using a docker-compose semantic to do something in k8s that is not a one to one match, it will get confusing really quickly.

So my question is: "how do you do batch processing in Docker Swarm?"

sebgoa avatar Nov 21 '16 14:11 sebgoa

If we start using a docker-compose semantic to do something in k8s that is not a one to one match, it will get confusing really quickly.

+1

kadel avatar Nov 22 '16 13:11 kadel

So for creating jobs we can use docker-compose's restart for this, it's just we define some constraints like

  • default value of restart is always: where we create normal deployments or deploymentconfigs and services
  • if restart is on-failure we create job which restarts on failure
  • if restart is no we create job which does not restart on failure

e.g. about docker-compose restart https://github.com/docker/compose/blob/50faddb683d76567d64c8ef7dd1a09358f68a779/tests/fixtures/restart/docker-compose.yml

restart from docker-compose should map to restartPolicy in PodSpec

kadel avatar Nov 22 '16 14:11 kadel

I think the confusion in the above discussion comes down to restartPolicy vs. kubectl run.

For cluster version >= 1.3:

  • Always creates a Deployment
  • OnFailure creates a Job
  • Never creates a Pod

From http://kubernetes.io/docs/user-guide/kubectl/kubectl_run/

 --generator string           The name of the API generator to use.  Default is 'deployment/v1beta1' if --restart=Always, 'job/v1' for OnFailure and 'run-pod/v1' for Never.  This will happen only for cluster version at least 1.3, for 1.2 we will fallback to 'deployment/v1beta1' for --restart=Always, 'job/v1' for others, for olders we will fallback to 'run/v1' for --restart=Always, 'run-pod/v1' for others.
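The generator selection described in the quoted docs can be paraphrased like this (illustrative only, not kubectl source):

```python
# Paraphrase of the generator selection described in the quoted
# kubectl docs; illustrative only, not kubectl source.
def generator_for(restart: str, cluster: tuple) -> str:
    """Pick the kubectl run generator for a --restart value and cluster version."""
    if cluster >= (1, 3):
        return {"Always": "deployment/v1beta1",
                "OnFailure": "job/v1",
                "Never": "run-pod/v1"}[restart]
    if cluster >= (1, 2):
        return "deployment/v1beta1" if restart == "Always" else "job/v1"
    return "run/v1" if restart == "Always" else "run-pod/v1"

print(generator_for("OnFailure", (1, 3)))  # job/v1
```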

sstarcher avatar Nov 22 '16 14:11 sstarcher

Okay, so what I understand here is that we map docker-compose's restart to the PodSpec's restartPolicy, but that's already done in the code.

surajssd avatar Nov 22 '16 14:11 surajssd

  • default value of restart is always: where we create normal deployments or deploymentconfigs and services
  • if restart is on-failure we create job which restarts on failure
  • if restart is no we create job which does not restart on failure

@surajssd In kubectl run, we share the same pattern (see the --restart flag), except that when restart is no, a Pod will be created instead of a Job.

janetkuo avatar Nov 23 '16 18:11 janetkuo

There is also another thing that we should be aware of. We are assuming a different default value for restart than docker-compose does. If you don't specify restart, docker uses no as the default. But kompose creates controller objects (Deployments, ReplicaSets, ...) and they all have restartPolicy: Always (the podSpec in those objects can't have a different restartPolicy).

kadel avatar Dec 05 '16 15:12 kadel

So, from what I understand of @janetkuo's and @surajssd's comments, there is a proposal to handle restart values like this:

| docker-compose restart value | object created by kompose |
| --- | --- |
| no restart specified | controller object |
| always | controller object |
| no | Pod |
| on-failure | Job (can have a max-retries argument: on-failure[:max-retries]) |
| unless-stopped | controller object |

Docker documentation regarding restart values https://docs.docker.com/engine/reference/run/#restart-policies---restart (this is same for docker-compose)
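The table could be implemented along these lines (a hypothetical sketch, not kompose's real converter; treating a "controller object" as a Deployment is a simplification here):

```python
# Hypothetical implementation of the proposed mapping table, including
# parsing of docker's on-failure[:max-retries] syntax. Treating a
# "controller object" as a Deployment is a simplification.
def convert(restart):
    """Return (kind, restartPolicy, max_retries) for a compose restart value."""
    if restart is None or restart in ("always", "unless-stopped"):
        return ("Deployment", "Always", None)   # controller object
    if restart == "no":
        return ("Pod", "Never", None)
    if restart.startswith("on-failure"):
        _, _, retries = restart.partition(":")
        return ("Job", "OnFailure", int(retries) if retries else None)
    raise ValueError(f"unknown restart value: {restart}")

print(convert("on-failure:3"))  # ('Job', 'OnFailure', 3)
```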

kadel avatar Dec 05 '16 15:12 kadel

ok fine with me, but Pods can have restart policies as well, and Jobs are much more than just Pods that restart.

sebgoa avatar Dec 07 '16 18:12 sebgoa

A small modification to this: let's start implementing it, but when restart: on-failure, convert to a Pod with restartPolicy: OnFailure.

kadel avatar Dec 07 '16 18:12 kadel

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta. /lifecycle stale

fejta-bot avatar Jan 04 '18 02:01 fejta-bot

/lifecycle frozen

cdrage avatar Jan 04 '18 18:01 cdrage

A 6-year-old issue with no further discussion on whether we should implement this or not. Going to close, sorry!

cdrage avatar Nov 21 '22 15:11 cdrage