falcosidekick-ui
Blank charts in falcosidekick-ui but falco is working (ARM)
Describe the bug
The falcosidekick-ui is showing blank charts, but I can see in the header that it is reporting the events. I can also see events appearing in the Falco logs.
How to reproduce it
The installation command:
helm install falco falcosecurity/falco --namespace falco --create-namespace --values values.yaml
My values file:
driver:
  kind: ebpf
falcosidekick:
  enabled: true
  replicaCount: 1
  webui:
    enabled: true
    replicaCount: 1
    user: "REDACTED"
    ingress:
      enabled: true
      hosts:
        - host: falco.magi.lan
          paths:
            - path: /
      tls:
        - hosts:
            - falco.magi.lan
  config:
    customfields: "cluster:MAGI"
tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
  - effect: NoSchedule
    key: node-role.kubernetes.io/control-plane
  - key: CriticalAddonsOnly
    operator: Exists
    effect: NoSchedule
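As a quick sanity check after installing with these values, the three components can be inspected directly (a minimal sketch; the deployment names assume the release name falco used in the command above):
# list the pods created by the chart
kubectl -n falco get pods
# tail the logs of falcosidekick and the UI
kubectl -n falco logs deploy/falco-falcosidekick --tail=20
kubectl -n falco logs deploy/falco-falcosidekick-ui --tail=20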
Expected behaviour
The UI charts should be populated with falco events.
Screenshots
Environment
- Falco version: 0.34.1 (aarch64)
- System info:
  {
    "machine": "aarch64",
    "nodename": "falco-6s564",
    "release": "6.1.19-v8+",
    "sysname": "Linux",
    "version": "#1637 SMP PREEMPT Tue Mar 14 11:11:47 GMT 2023"
  }
- Cloud provider or hardware configuration: 3-node K3s cluster; the nodes are Raspberry Pi 4 B with 4GB of RAM each.
- OS: Debian GNU/Linux 11 (raspbian bullseye)
- Kernel: Linux 6.1.19-v8+ #1637 SMP PREEMPT Tue Mar 14 11:11:47 GMT 2023 aarch64 GNU/Linux
- Installation method: Falco Helm chart
What are the logs in Falcosidekick pods?
Thanks for the quick response!
falco-falcosidekick pod:
2023/04/04 11:51:06 [INFO] : Falco Sidekick version: 2.27.0
2023/04/04 11:51:06 [INFO] : Enabled Outputs : [WebUI]
2023/04/04 11:51:06 [INFO] : Falco Sidekick is up and listening on :2801
2023/04/04 11:52:39 [INFO] : WebUI - Post OK (200)
2023/04/04 11:53:06 [INFO] : WebUI - Post OK (200)
2023/04/04 11:53:08 [INFO] : WebUI - Post OK (200)
2023/04/04 11:53:23 [INFO] : WebUI - Post OK (200)
2023/04/04 11:53:24 [INFO] : WebUI - Post OK (200)
2023/04/04 11:53:25 [INFO] : WebUI - Post OK (200)
2023/04/04 11:53:33 [INFO] : WebUI - Post OK (200)
falco-falcosidekick-ui pod:
2023/04/04 11:51:30 [WARN] : Index does not exist
2023/04/04 11:51:30 [WARN] : Create Index
2023/04/04 11:51:30 [INFO] : Falcosidekick UI is listening on 0.0.0.0:2802
2023/04/04 11:51:30 [INFO] : log level is info
2023/04/04 11:56:38 [INFO] : user 'admin' authenticated
falco-falcosidekick-ui-redis pod:
9:C 04 Apr 2023 11:51:07.412 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
9:C 04 Apr 2023 11:51:07.412 # Redis version=6.2.10, bits=64, commit=00000000, modified=0, pid=9, just started
9:C 04 Apr 2023 11:51:07.412 # Configuration loaded
9:M 04 Apr 2023 11:51:07.419 * monotonic clock: POSIX clock_gettime
9:M 04 Apr 2023 11:51:07.423 * Running mode=standalone, port=6379.
9:M 04 Apr 2023 11:51:07.423 # Server initialized
9:M 04 Apr 2023 11:51:07.462 * <search> Redis version found by RedisSearch : 6.2.10 - oss
9:M 04 Apr 2023 11:51:07.462 * <search> RediSearch version 2.6.5 (Git=HEAD-71bd22f3)
9:M 04 Apr 2023 11:51:07.462 * <search> Low level api version 1 initialized successfully
9:M 04 Apr 2023 11:51:07.462 * <search> concurrent writes: OFF, gc: ON, prefix min length: 2, prefix max expansions: 200, query timeout (ms): 500, timeout policy: return, cursor read size: 1000, cursor max idle (ms): 300000, max doctable size: 1000000, max number of search results: 10000, search pool size: 20, index pool size: 8,
9:M 04 Apr 2023 11:51:07.466 * <search> Initialized thread pool!
9:M 04 Apr 2023 11:51:07.466 * <search> Enabled diskless replication
9:M 04 Apr 2023 11:51:07.466 * <search> Enabled role change notification
9:M 04 Apr 2023 11:51:07.467 * Module 'search' loaded from /opt/redis-stack/lib/redisearch.so
9:M 04 Apr 2023 11:51:07.491 * <graph> Starting up RedisGraph version 2.10.8.
9:M 04 Apr 2023 11:51:07.510 * <graph> Thread pool created, using 4 threads.
9:M 04 Apr 2023 11:51:07.510 * <graph> Maximum number of OpenMP threads set to 4
9:M 04 Apr 2023 11:51:07.510 * Module 'graph' loaded from /opt/redis-stack/lib/redisgraph.so
9:M 04 Apr 2023 11:51:07.511 * <timeseries> RedisTimeSeries version 10805, git_sha=a7a296c8f3f6e312811032233a468d57d45957ca
9:M 04 Apr 2023 11:51:07.511 * <timeseries> Redis version found by RedisTimeSeries : 6.2.10 - oss
9:M 04 Apr 2023 11:51:07.511 * <timeseries> loaded default CHUNK_SIZE_BYTES policy: 4096
9:M 04 Apr 2023 11:51:07.511 * <timeseries> loaded server DUPLICATE_POLICY: block
9:M 04 Apr 2023 11:51:07.511 * <timeseries> Setting default series ENCODING to: compressed
9:M 04 Apr 2023 11:51:07.511 * <timeseries> Detected redis oss
9:M 04 Apr 2023 11:51:07.519 * <timeseries> Enabled diskless replication
9:M 04 Apr 2023 11:51:07.519 * Module 'timeseries' loaded from /opt/redis-stack/lib/redistimeseries.so
9:M 04 Apr 2023 11:51:07.520 * <ReJSON> version: 20404 git sha: eb5eba8 branch: HEAD
9:M 04 Apr 2023 11:51:07.520 * <ReJSON> Exported RedisJSON_V1 API
9:M 04 Apr 2023 11:51:07.520 * <ReJSON> Exported RedisJSON_V2 API
9:M 04 Apr 2023 11:51:07.520 * <ReJSON> Exported RedisJSON_V3 API
9:M 04 Apr 2023 11:51:07.520 * <ReJSON> Enabled diskless replication
9:M 04 Apr 2023 11:51:07.520 * <ReJSON> Created new data type 'ReJSON-RL'
9:M 04 Apr 2023 11:51:07.520 * Module 'ReJSON' loaded from /opt/redis-stack/lib/rejson.so
9:M 04 Apr 2023 11:51:07.520 * <search> Acquired RedisJSON_V3 API
9:M 04 Apr 2023 11:51:07.520 * <graph> Acquired RedisJSON_V1 API
9:M 04 Apr 2023 11:51:07.524 * <bf> RedisBloom version 2.4.4 (Git=unknown)
9:M 04 Apr 2023 11:51:07.524 * Module 'bf' loaded from /opt/redis-stack/lib/redisbloom.so
9:M 04 Apr 2023 11:51:07.526 * Ready to accept connections
9:M 04 Apr 2023 11:56:08.003 * 100 changes in 300 seconds. Saving...
9:M 04 Apr 2023 11:56:08.005 * Background saving started by pid 35
35:C 04 Apr 2023 11:56:08.013 * DB saved on disk
35:C 04 Apr 2023 11:56:08.015 * RDB: 0 MB of memory used by copy-on-write
9:M 04 Apr 2023 11:56:08.106 * Background saving terminated with success
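For reference, one way to check whether events are actually landing in the UI's RediSearch index is to query Redis directly (a sketch; <redis-pod> is a placeholder for the falco-falcosidekick-ui-redis pod name, and the index name is discovered rather than assumed):
# list the RediSearch indexes created by falcosidekick-ui
kubectl -n falco exec -it <redis-pod> -- redis-cli FT._LIST
# show one stored event from that index (replace <index> with a name returned above)
kubectl -n falco exec -it <redis-pod> -- redis-cli FT.SEARCH <index> "*" LIMIT 0 1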
The logs are 100% normal. I saw your issue in the charts repo; I may have broken something for people using an Ingress with falcosidekick-ui by adding an auth mechanism. The strangest thing is that you get the global counts in the header. Do you have any failures in your browser console?
The console is empty sadly 😢
No request with a 401 error?
I would like to say yes but no...:
O_o no error in the logs, no error in the console, no error in the requests.
Can you try:
- to access the falcosidekick-ui service directly with a port-forward (see the example command below)
- to change the Since filter to a longer period
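For example (assuming the release name falco, the namespace falco, and the UI port 2802 seen in the logs above):
# forward the falcosidekick-ui service to localhost
kubectl -n falco port-forward svc/falco-falcosidekick-ui 2802:2802
# then open http://localhost:2802 in the browser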
Same result using the port forwarding method:
The Since filter did not help either.
It is weird, I know 😅; the most annoying thing is that the header is working as expected for some reason xD
Select 1Y for Since to see if it's not an issue with dates, please.
Screenshots with the since filter set to 1Y:
This is a mystery, I don't understand what's happening :sob:
Yeah... it is really frustrating 😭
I never noticed such strange behavior. Are you available on kubernetes.slack.com #falco? It will be easier to help, I think.
Nope, but I guess I can just create an account for it
same here :(
For people facing the same issue, it is scoped to arm64 users.
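A quick way to confirm whether a cluster is affected is to check the node architectures (a minimal sketch using standard kubectl output; arm64/aarch64 nodes are the ones hitting this):
# print the architecture reported by each node
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.architecture}{"\n"}{end}'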
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Provide feedback via https://github.com/falcosecurity/community.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Provide feedback via https://github.com/falcosecurity/community.
/lifecycle rotten
/remove-lifecycle rotten