kspan
Instructions to run `kspan`
Hi,
Where can I find the instructions to run kspan? I see in the README file there's a step to run the Jaeger UI, but not kspan itself.
I tried using the image uploaded to Docker Hub by @bboreham, but it failed:
```
$ docker run bboreham/kspan:dev
2021-03-21T17:00:15.087Z ERROR controller-runtime.client.config unable to get kubeconfig {"error": "could not locate a kubeconfig"}
github.com/go-logr/zapr.(*zapLogger).Error
	/go/pkg/mod/github.com/go-logr/[email protected]/zapr.go:128
sigs.k8s.io/controller-runtime/pkg/client/config.GetConfigOrDie
	/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/client/config/config.go:162
main.main
	/workspace/main.go:78
runtime.main
	/usr/local/go/src/runtime/proc.go:204
```
Sorry, it's all a work in progress and there are no instructions yet.
The program needs to connect to a Kubernetes api-server to receive events; that is the cause of the error you got. If you run it as a pod, it will pick up the address automatically; you will then have to grant it permissions.
It also needs to connect to an OpenTelemetry service that will receive spans. That can be Jaeger, hence that line in the README. You would then view the spans in the receiver's UI.
Thank you @bboreham.
I have run it as a Pod and it has picked up the kubeconfig automatically.
At the same time, I'm running Jaeger with Docker:
$ docker run -d --name jaeger -p 16686:16686 -p 55680:55680 jaegertracing/opentelemetry-all-in-one
but I don't see any spans in the UI coming from the workloads that I'm deploying.
Nice!
The pod will have to be able to contact Jaeger; you can set the `-otlp-addr` CLI flag to point to your host address, which the `docker ... -p` mapping makes it listen on.
By default kspan will use `otlp-collector.default:55680`, so the other approach would be to run Jaeger as a pod and add a Service with that name.
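For the first option, the flag would be passed as a container arg. A minimal sketch of the relevant Pod spec fragment (the IP is a placeholder; replace it with your Docker host's address):

```yaml
containers:
- image: docker.io/weaveworks/kspan:v0.0
  name: kspan
  args:
  # gRPC address of the OTLP receiver; 192.0.2.10 is a placeholder host IP
  - -otlp-addr=192.0.2.10:55680
```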
I have chosen the second option: running Jaeger as a pod and adding the Service.
I'll be doing a write-up about this tomorrow in case anyone wants to run it locally as well, and then I'll close the issue. Thank you very much for your help, @bboreham!
I have now set up Continuous Integration checks and stamped a version 0.0. Please use the image `docker.io/weaveworks/kspan:v0.0` rather than the `dev` one.
If you'd like to contribute a "how to run" page to this repo, please do.
> I'll be doing some write-up about this tomorrow
Any news on the write-up? Thanks!
@staranto I haven't updated the README file yet, so in the meantime please refer to this blog post.
Thanks Felipe! This is great. I've got everything deployed almost correctly. The kspan pod log is showing dozens of events in the cluster, however the Jaeger UI is only showing the jaeger-query service. Nothing else, including from the sample Nginx deployment. I'll keep digging.
@staranto Then it could be a communication issue between the kspan container and the Jaeger container. This communication happens through an internal Service named `otlp-collector` in the `default` namespace.
Have you created such a resource? If so, does it have any endpoints?
```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: jaeger
  name: otlp-collector
spec:
  ports:
  - port: 55680
    protocol: TCP
    targetPort: 55680
  selector:
    app: jaeger
```
$ kubectl apply -f jaeger-svc.yaml
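If the Service's selector matches the Jaeger pod's labels, it should list at least one endpoint. A quick way to check (resource names assumed from the manifest above):

```shell
# Does the otlp-collector Service have any endpoints behind it?
kubectl get endpoints otlp-collector -n default

# Does the selector actually match a running pod?
kubectl get pods -n default -l app=jaeger -o wide
```

An empty `ENDPOINTS` column means the selector doesn't match the Jaeger pod's labels, and spans sent to the Service will go nowhere.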
I believe the problem is a missing ClusterRoleBinding. @staranto can you have a look at the kspan logs via `kubectl logs $POD_NAME`? You will see whether the `default` service account has access to everything it needs.
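Another way to check the permissions directly is `kubectl auth can-i` (a sketch; assumes kspan runs under the `default` service account in the `default` namespace):

```shell
# Can kspan's service account watch events?
kubectl auth can-i watch events \
  --as=system:serviceaccount:default:default

# Same check for deployments in the apps API group
kubectl auth can-i watch deployments.apps \
  --as=system:serviceaccount:default:default
```

Each command prints `yes` or `no`; any `no` points at a missing ClusterRole rule or binding.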
@thundering-herd I had the CRB in place already.
@felipecruz91 The problem was that I missed that it was using the default namespace. I had created everything in a new namespace. Moved it to default and all is well.
Thanks to you both!
Hey 👋 -- I'm trying to deploy kspan into my cluster, but I'm failing at some steps.
I've created the service for my jaeger, the kspan pod, clusterRole, clusterRoleBinding and serviceAccount.
YAML manifests
```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: jaeger
  name: otlp-collector
  namespace: monitoring
spec:
  ports:
  - port: 55680
    protocol: TCP
    targetPort: 55680
  selector:
    app: jaeger
    app.kubernetes.io/component: all-in-one
    app.kubernetes.io/instance: jaeger
    app.kubernetes.io/managed-by: jaeger-operator
    app.kubernetes.io/name: jaeger
    app.kubernetes.io/part-of: jaeger
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: kspan
  name: kspan
  namespace: monitoring
spec:
  serviceAccountName: kspan
  containers:
  - image: docker.io/weaveworks/kspan:v0.0
    args:
    - -otlp-addr=http://otlp-collector.monitoring.svc:55680
    name: kspan
  dnsPolicy: ClusterFirst
  restartPolicy: Always
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kspan
  namespace: monitoring
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kspan
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kspan
subjects:
- kind: ServiceAccount
  name: kspan
  namespace: monitoring
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kspan
  namespace: monitoring
rules:
- apiGroups: [""]
  resources: ["events", "pods"]
  verbs: ["list", "get", "watch"]
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets"]
  verbs: ["list", "get", "watch"]
- apiGroups: ["networking.k8s.io"]
  resources: ["ingresses"]
  verbs: ["list", "get", "watch"]
```
Kspan seems to be running smoothly according to its logs; it says spans are being added and emitted. However, I can't see anything in the Jaeger UI.
```
2021-04-14T19:32:06.580Z INFO emitting span {"ref": "replicaset:default/nginx-86c57db685", "name": "ReplicaSet.FailedCreate"}
2021-04-14T19:32:06.580Z DEBUG controller-runtime.controller Successfully Reconciled {"controller": "event", "request": "default/nginx-86c57db685.1675d057b8972602"}
2021-04-14T19:32:16.823Z INFO event {"event": "default/nginx-86c57db685.1675d057b8972602", "kind": "ReplicaSet", "reason": "FailedCreate", "source": "replicaset-controller"}
2021-04-14T19:32:16.826Z INFO adding span {"ref": "replicaset:default/nginx-86c57db685", "name": "ReplicaSet.FailedCreate", "start": "19:32:16.823", "end": "19:32:16.823"}
2021-04-14T19:32:16.826Z INFO emitting span {"ref": "replicaset:default/nginx-86c57db685", "name": "ReplicaSet.FailedCreate"}
2021-04-14T19:32:16.826Z DEBUG controller-runtime.controller Successfully Reconciled {"controller": "event", "request": "default/nginx-86c57db685.1675d057b8972602"}
2021-04-14T19:32:27.677Z INFO deferred emit {"ref": "replicaset:default/nginx-86c57db685", "name": "ReplicaSet.FailedCreate", "endTime": "2021-04-14T19:32:16.823Z", "threshold": "2021-04-14T19:32:17.677Z"}
2021-04-14T19:32:37.312Z INFO event {"event": "default/nginx-86c57db685.1675d057b8972602", "kind": "ReplicaSet", "reason": "FailedCreate", "source": "replicaset-controller"}
2021-04-14T19:32:37.316Z INFO adding span {"ref": "replicaset:default/nginx-86c57db685", "name": "ReplicaSet.FailedCreate", "start": "19:32:37.312", "end": "19:32:37.312"}
2021-04-14T19:32:37.316Z DEBUG controller-runtime.controller Successfully Reconciled {"controller": "event", "request": "default/nginx-86c57db685.1675d057b8972602"}
2021-04-14T19:32:47.677Z INFO deferred emit {"ref": "replicaset:default/nginx-86c57db685", "name": "ReplicaSet.FailedCreate", "endTime": "2021-04-14T19:32:37.312Z", "threshold": "2021-04-14T19:32:37.677Z"}
2021-04-14T19:33:18.284Z INFO event {"event": "default/nginx-86c57db685.1675d057b8972602", "kind": "ReplicaSet", "reason": "FailedCreate", "source": "replicaset-controller"}
2021-04-14T19:33:18.289Z INFO adding span {"ref": "replicaset:default/nginx-86c57db685", "name": "ReplicaSet.FailedCreate", "start": "19:33:18.284", "end": "19:33:18.284"}
2021-04-14T19:33:18.289Z DEBUG controller-runtime.controller Successfully Reconciled {"controller": "event", "request": "default/nginx-86c57db685.1675d057b8972602"}
2021-04-14T19:33:30.177Z INFO deferred emit {"ref": "replicaset:default/nginx-86c57db685", "name": "ReplicaSet.FailedCreate", "endTime": "2021-04-14T19:33:18.284Z", "threshold": "2021-04-14T19:33:20.177Z"}
2021-04-14T19:34:09.687Z INFO event {"event": "kube-system/resource-tracker.1674377efd72a372", "kind": "", "reason": "BigQueryUpload", "source": "resource-tracker"}
```
Does anyone know what I could be doing wrong? Any guidance is very much appreciated :)
I see you're deploying the kspan-related resources to a namespace named `monitoring`.
Quoting @bboreham:
> By default kspan will use `otlp-collector.default:55680`
Try instead applying the manifests in the `default` namespace while @bboreham provides a solution :)
You have this:
`-otlp-addr=http://otlp-collector.monitoring.svc:55680`
Try `-otlp-addr=otlp-collector.monitoring:55680` instead (no `http://`; it's a gRPC protocol).
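Applied to the Pod manifest above, only the args entry changes. A sketch of the corrected container spec:

```yaml
containers:
- image: docker.io/weaveworks/kspan:v0.0
  name: kspan
  args:
  # gRPC target is host:port only; no http:// scheme prefix
  - -otlp-addr=otlp-collector.monitoring:55680
```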
Hey folks, saw this at Kubecon and had to give it a whirl. I've gotten kspan emitting span data according to the logs, but I don't see them in my Jaeger instance. Kspan is pointed at the Jaeger services in my cluster, but I notice that my Jaeger install does not expose port 55680. The pod only exposes ports 5775, 5778, 6831, 6832, 9411, 14250, 14267, 14268, 14269, and 16686 (for the GUI). I've tried directing kspan to each of those ports through the relevant service, except 16686, but none of them have worked. Is there an intermediate component or Jaeger config that's missing?
For ref, my Jaeger is deployed by generating manifests from the operator. I've added some bits for Istio ingress but it is otherwise a default all-in-one deployment.
@delve you may need a newer collector - see https://www.jaegertracing.io/docs/1.21/opentelemetry/
However, that support is marked experimental.
I finally worked out a deployment YAML that works:
```yaml
# all-in-one.yaml
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: jaeger
  name: otlp-collector
spec:
  ports:
  - port: 55680
    protocol: TCP
    targetPort: 55680
  selector:
    app: jaeger
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: jaeger
  name: jaeger
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jaeger
  strategy: {}
  template:
    metadata:
      labels:
        app: jaeger
    spec:
      containers:
      - image: jaegertracing/opentelemetry-all-in-one
        name: opentelemetry-all-in-one
        resources: {}
        ports:
        - containerPort: 16685
        - containerPort: 16686
        - containerPort: 5775
          protocol: UDP
        - containerPort: 6831
          protocol: UDP
        - containerPort: 6832
          protocol: UDP
        - containerPort: 5778
          protocol: TCP
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kspan
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: null
  name: kspan-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kspan
  namespace: default
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: kspan
  name: kspan
spec:
  containers:
  - image: docker.io/weaveworks/kspan:v0.0
    name: kspan
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  serviceAccountName: kspan
```
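To try it out (a sketch; assumes the manifest above is saved as all-in-one.yaml and everything lands in the `default` namespace):

```shell
# Deploy Jaeger, the otlp-collector Service, and kspan
kubectl apply -f all-in-one.yaml

# Forward the Jaeger UI to localhost, then browse to http://localhost:16686
kubectl port-forward deploy/jaeger 16686:16686
```

Note the ClusterRoleBinding here grants `cluster-admin`, which is convenient for a local experiment but broader than the read-only events/pods/deployments access kspan actually needs.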