Command to display `service`-`app` relation
For developers it is crucial to know how to reach other (micro)services in the mesh. In Kubernetes, the default way to reach another application is to resolve the Service name (see https://kubernetes.io/docs/concepts/services-networking/service/) via the cluster-internal DNS. Given the name and the port, you can connect to an HTTP-based Service like so:
```python
import requests

r = requests.get("http://myservice-fancy:9090/")
```
However, if this is not documented anywhere, there is currently no way for a developer to know about it. This is a feature proposal for the Unikube CLI to solve that.
A K8s Service has a name attribute, for example:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: myservice-fancy
spec:
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 9090
      targetPort: 9376
```
In this example the Service is called `myservice-fancy` and listens on port 9090. One can request the Service with the Python code snippet from above.
The Service acts as a load balancer (not to be confused with `type: LoadBalancer` at this point) and forwards requests to Pods carrying the label `app=my-app` from its `selector` attribute.
The Unikube CLI command

```shell
$> unikube app list
```

currently translates to a `kubectl get pods` in the namespace of the Deck. I'd suggest printing all available Services, their ports, and the associated Pods in the already existing output of that command.
Please use the KubeAPI wrapper to integrate the required helper functions: https://github.com/unikubehq/cli/blob/b436f5904df7f4434b7e92cb189e34ffccd584b0/src/local/system.py#L346
You can find examples of how to list Pods and Services here: https://github.com/kubernetes-client/python#examples
@schwobaseggl can help as well.
The idea is to list all Services and read their `selector` and port attributes. The selector's key/value pairs must then be found among the labels of a Pod from the Pod list. That way you can make the match and display the information (in an additional column?). Mind that there may be a Service with no matching Pod, as well as Pods with no matching Service.
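The matching step could look like the following sketch. It operates on plain manifest dicts (e.g. as returned by the KubeAPI wrapper or `kubectl get ... -o json`); the function names and the output shape are assumptions for illustration, not the actual CLI API:

```python
def selector_matches(selector: dict, labels: dict) -> bool:
    """A Pod matches a Service when every selector key/value pair
    appears in the Pod's labels (extra Pod labels are fine).
    An empty selector matches nothing here."""
    return bool(selector) and all(labels.get(k) == v for k, v in selector.items())


def map_services_to_pods(services: list, pods: list) -> dict:
    """Build {service name: {"ports": [...], "pods": [...]}} from manifest dicts."""
    result = {}
    for svc in services:
        selector = svc["spec"].get("selector", {})
        ports = [p["port"] for p in svc["spec"].get("ports", [])]
        matched = [
            pod["metadata"]["name"]
            for pod in pods
            if selector_matches(selector, pod["metadata"].get("labels", {}))
        ]
        # A Service may end up with an empty "pods" list; Pods without
        # a matching Service simply never appear in any list.
        result[svc["metadata"]["name"]] = {"ports": ports, "pods": matched}
    return result
```

The pure matching logic is kept separate from the API calls so it can be unit-tested without a cluster.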
I started a Python lib that can be used for this purpose: https://github.com/schille/caboto. It depends on introspecting the K8s manifests. That is a more complex approach; the options are:
- intercept the manifests during Deck installation and feed them into Caboto
- load the Caboto graph from the platform and analyse it in the CLI
- find a way to export YAML manifests of everything from the running cluster
I'd suggest the second option as it does not pose compatibility issues for different CLI versions and/or the Caboto format.
@Schille Returning the entire Caboto graph as JSON might be difficult or overkill. What about option 2b: maybe we can use the new Caboto query language feature to make the Caboto graph queryable via the GraphQL endpoint, so that one can query exactly the information one needs.
Too much complexity if you ask me.
- the UK platform has to maintain a Caboto graph for each commit that the user is running (or generate it ad hoc)
- it adds a further query, i.e. another round of communication
- the knowledge of the applied workload manifests is rolled out to the CLI upon `deck install` already
Why not hook into the `unikube deck install` process and store the graph on the client side? That would allow asking Caboto questions even if the project is down (or the cluster is not connected) and reduce CLI response time.
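A minimal sketch of that client-side persistence, assuming the install hook hands us the applied manifests as plain dicts (the cache location, deck identifier, and function names are all hypothetical, not part of the current CLI):

```python
import json
from pathlib import Path

# Hypothetical cache location; the real CLI would choose its own config dir.
CACHE_DIR = Path.home() / ".unikube" / "graph-cache"


def store_manifests(deck_id: str, manifests: list) -> Path:
    """Persist the manifests applied during `unikube deck install`
    so a Caboto graph can be (re)built locally later, even while
    the cluster is down or disconnected."""
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    path = CACHE_DIR / f"{deck_id}.json"
    path.write_text(json.dumps(manifests, indent=2))
    return path


def load_manifests(deck_id: str) -> list:
    """Read back the cached manifests for offline queries."""
    return json.loads((CACHE_DIR / f"{deck_id}.json").read_text())
```

Feeding the cached manifests into Caboto would then be a purely local operation with no extra round trip to the platform.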