parca-agent
Current `main` doesn't start on Scaleway
It would be great to be able to run the latest version of Parca Agent on our demo.parca.dev instance.
Right now the agent doesn't start on Scaleway Kubernetes nodes, logging the following:

```
level=warn name=parca-agent ts=2022-08-29T12:22:59.162801832Z caller=main.go:139 msg="failed to determine if eBPF is supported" err="kernel config not found"
level=error name=parca-agent ts=2022-08-29T12:22:59.162868702Z caller=main.go:121 err="host kernel does not support eBPF"
```
The pod is running image `ghcr.io/parca-dev/parca-agent:main-5920bd0f` with the arguments:

```
/bin/parca-agent
--log-level=info
--node=$(NODE_NAME)
--remote-store-address=parca.parca.svc.cluster.local:7070
--remote-store-insecure
--remote-store-insecure-skip-verify
```
When I `kubectl exec` into a pod (a Parca pod, since the agent image has no shell inside), it seems there is no `/boot` directory available, nor any `/proc/config*` files.

Could you try mounting the config with a `hostPath` volume?
```yaml
spec:
  containers:
  - name: parca-agent
    volumeMounts:
    - mountPath: /boot/config
      name: kconfig
  volumes:
  - name: kconfig
    hostPath:
      path: /path/to/kconfig
      type: File
```
Just tried this and also looked around on a similar host. The files kconfig is looking for don't exist on those machines.

I'll look into where kernel configs are stored in these environments (hint: probably `/usr/src`). Although I won't have time to tackle this for the next 3 weeks because of priority issues and conference travel. So until then, I would like to reiterate my suggestion of keeping these logs at the warning level. These checks are for the user's convenience, and they shouldn't stop users from running the agent right now.
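To make the "look into where kernel configs are stored" step concrete, here is a rough Go sketch that probes a few locations where distros commonly ship the config. The candidate list (including the `/usr/src` headers path) is an assumption for illustration, not the agent's actual search order:

```go
package main

import (
	"fmt"
	"os"
)

// candidateConfigPaths returns locations where a kernel config is commonly
// found for a given `uname -r` release string. The exact list is an
// assumption for illustration, not the agent's real search order.
func candidateConfigPaths(release string) []string {
	return []string{
		"/proc/config.gz",
		fmt.Sprintf("/boot/config-%s", release),
		fmt.Sprintf("/usr/src/linux-headers-%s/.config", release),
		fmt.Sprintf("/lib/modules/%s/config", release),
	}
}

func main() {
	for _, p := range candidateConfigPaths("5.4.0-122-generic") {
		if _, err := os.Stat(p); err == nil {
			fmt.Println("found:", p)
		} else {
			fmt.Println("missing:", p)
		}
	}
}
```

Running this inside the agent container on an affected node would show at a glance which, if any, of these paths are visible from the pod.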
@v-thakkar I can have a look at this. Let's talk about it offline.
Still happening with `ghcr.io/parca-dev/parca-agent:main-4ba5c0a1`.
While we have reduced the logs to a warning level now, let's keep this open to add more config locations for the checks.
The PR that changed the log level: https://github.com/parca-dev/parca-agent/pull/875
I will add additional start-up checks for similar environments.
Just to clarify, the remediation in https://github.com/parca-dev/parca-agent/pull/875 does two things:

- replaces the error with a warning log, to prevent the agent from exiting;
- fixes the logic: right now, even if there's an error, we still check the `bpfEnabled` variable, which might not be correct depending on the error handling semantics of the function.

As you said, this is just a remediation to unblock Parca Agent in environments where BPF is supported but the check fails, and we should fix the "BPF support" check.
The config probably does exist; the `uname` value we generate does not match the pattern. I'll investigate further.
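One plausible way such a mismatch creeps in (a hypothesis for illustration, not the confirmed bug) is NUL padding: `uname(2)` fills a fixed-size buffer, and if the trailing `\x00` bytes aren't trimmed, the generated `/boot/config-<release>` path never matches an existing file. A minimal sketch:

```go
package main

import (
	"fmt"
	"strings"
)

// configPathForRelease builds the conventional Debian/Ubuntu kernel config
// path for a release string. The NUL trim mimics what is needed when the
// release comes from the raw fixed-size uname(2) buffer; this is a
// hypothetical sketch of where a pattern mismatch could creep in.
func configPathForRelease(release string) string {
	return "/boot/config-" + strings.TrimRight(release, "\x00")
}

func main() {
	// A release string with stray NUL padding, as read from a fixed-size buffer.
	raw := "5.4.0-122-generic\x00\x00"
	fmt.Println(configPathForRelease(raw)) // /boot/config-5.4.0-122-generic
}
```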
What's the output of `uname -r`?
`5.4.0-122-generic`
Ok, then ideally it should be able to read `/boot/config-5.4.0-122-generic`. Might be worth checking the read permissions on the kernel config files. If that's the case, we may need to find a workaround.
`/boot` is not mounted in the agent pods; mounting it fixes the issue (did a quick try :cowboy_hat_face:):
```yaml
spec:
  containers:
  - name: parca-agent
    volumeMounts:
    - mountPath: /boot
      name: boot
  volumes:
  - name: boot
    hostPath:
      path: /boot
```