drone-runtime
Use a single namespace via `kube-namespace` flag
This PR is still missing:
- [ ] Clean up all created resources in the namespace after the pipeline finishes
- [ ] Services need to be unique even when they share the same name?!
/cc @bradrydzewski @MOZGIII
@metalmatze One point I would like to throw in here is that owner references could be used for the cleanup. The Job object is created, and all other objects get an owner reference added pointing to the Job (or another object that makes sense). That would then mean that when the Job is deleted, the other objects are deleted automatically as well (this can be done per object, AFAIK).
https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/
Though not 100% sure if this makes sense to use here. :slightly_smiling_face:
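As a sketch of what that could look like (all names and the UID here are illustrative, not from this PR): each per-pipeline object carries an `ownerReferences` entry pointing at the Job, so Kubernetes' garbage collector deletes it automatically once the Job is deleted.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres
  ownerReferences:
    - apiVersion: batch/v1
      kind: Job
      name: drone-job-42                           # illustrative name of the pipeline's Job
      uid: d9607e19-f88f-11e6-a518-42010a800195    # must match the Job's actual UID
      blockOwnerDeletion: true                     # deletion of the Job waits on this object
spec:
  ports:
    - port: 5432   # illustrative
```

With the default (background) deletion propagation, deleting the Job would then cascade to this Service without any extra cleanup code.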
@galexrt, this makes a lot of sense, and most things should be cleaned up automatically. But I agree that we should make sure it actually happens, and the owner reference is perfect for that! :+1:
In the last few weeks I've given this quite a lot of thought, especially for services that have the same name. Currently, I think we can work around the restrictions by templating the service names to something well known.
Let's say you want to start a Postgres Pod to use as a Drone service in your pipeline and want to reference it from your integration tests. It might happen that there are 2 (or more) concurrent pipelines for the same repository running, and they both reference `postgres.drone.svc.cluster.local`. There's no way we can tell which pipeline should be routed to which Postgres Pod.
I propose simply having `app-42-postgres.drone.svc.cluster.local`, where `app` is the name of the repository and `42` is the build number. We basically namespace the Services by name on our own. :man_shrugging:
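A minimal sketch of that name templating in Go (function name and format are my assumptions, not actual code from this PR):

```go
package main

import "fmt"

// serviceDNSName builds the per-build service name described above:
// <repo>-<build>-<service>, e.g. app-42-postgres, which inside the cluster
// would resolve as app-42-postgres.drone.svc.cluster.local.
func serviceDNSName(repo string, build int, service string) string {
	return fmt.Sprintf("%s-%d-%s", repo, build, service)
}

func main() {
	fmt.Println(serviceDNSName("app", 42, "postgres")) // prints "app-42-postgres"
}
```

Note that any real implementation would also have to sanitize and truncate the result so it stays a valid DNS label (lowercase alphanumerics and `-`, at most 63 characters).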
If I recall correctly there have been some heuristics around that for Docker in the past, right @bradrydzewski?
Is it possible to just use subdomains with the cluster DNS and define a different search domain for the involved pods?
@tboerger Mhh, haven't tried that, but it may work: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-config (the search-domain part at least).
That may raise the minimum supported Kubernetes version to around 1.10+ or so.
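A sketch of the pod-level DNS config from the linked docs, applied to this idea (the search domain shown is illustrative, assuming a per-build subdomain):

```yaml
# Pod spec fragment: append a per-pipeline search domain so that a plain
# name like "postgres" resolves within this build's scope first.
dnsPolicy: ClusterFirst
dnsConfig:
  searches:
    - drone-42.default.svc.cluster.local   # illustrative per-build subdomain
```

`dnsConfig` supplements the `ClusterFirst` policy rather than replacing it, so normal cluster names keep resolving.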
So far I like where this is going. Using GC and owner references sounds like a great idea. "Namespacing on our own" sounds good too; it's most likely the simplest solution in terms of relying on other parts of the cluster. Namespacing via DNS subdomains looks sort of ideal: if we can pull it off, I'd say we could even reconsider making a single namespace the default again, as all the constraints I see that force using multiple namespaces would be fulfilled, and we'd get the benefits of using standard RBAC for restricting access.
Please take a look at #53 - I think I found a solution to the name resolution issue.
@MOZGIII I think your conclusion is right. There is even an example here: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-hostname-and-subdomain-fields Basically the Pod needs to have `hostname`/`subdomain` values, i.e.
```yaml
hostname: postgres    # of course that would break scaling of StatefulSets/Deployments,
                      # but that is a non-issue since there will always be one pod per
                      # Set/Deployment for Drone services
subdomain: drone-123
dnsConfig:
  searches:
    - drone-123.my-namespace.svc.cluster.local
```
plus a Service with the name `drone-123`, and maybe `publishNotReadyAddresses: true` inside the Service.
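For completeness, a sketch of what that matching headless Service could look like (the name follows the example above; the selector label is hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: drone-123                # matches the pods' subdomain field
spec:
  clusterIP: None                # headless: DNS records point straight at pod IPs
  publishNotReadyAddresses: true # expose pods in DNS before readiness probes pass
  selector:
    io.drone/build: "123"        # hypothetical label selecting this build's pods
```

With this, `postgres.drone-123.my-namespace.svc.cluster.local` would resolve to the single Postgres pod of build 123, and the search domain makes plain `postgres` work inside that build's pods.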