panic caused by `fatal error: concurrent map writes` in sink.go
Expected Behavior
During normal operation, the EventListener does not panic.
Actual Behavior
The EventListener panics on concurrent write access to the extensions map.
Steps to Reproduce the Problem
The problem generally occurs when a TriggerGroup targets multiple Triggers that use extensions.
I've added a unit test with high concurrency (100 goroutines) that reproduces the problem:
- https://github.com/tektoncd/triggers/pull/1866
`go test -race` detects the problem even with low concurrency (only 2 goroutines).
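For illustration, here is a minimal, self-contained sketch of the failure mode: one goroutine per Trigger in a TriggerGroup writing into a single shared extensions map. This is not the actual sink.go code or the test from the linked PR; the names `extensions` and `numTriggers` are hypothetical stand-ins for the real code path in the EventListener sink.

```go
package main

import (
	"fmt"
	"sync"
	"testing"
)

func TestConcurrentExtensionWrites(t *testing.T) {
	// Shared payload extensions, analogous to the map that each Trigger's
	// interceptor chain writes its extension fields into.
	extensions := map[string]interface{}{}

	// ~100 goroutines reliably trips "fatal error: concurrent map writes";
	// the race detector reports the data race with as few as 2.
	const numTriggers = 100

	var wg sync.WaitGroup
	for i := 0; i < numTriggers; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			// Unsynchronized write to the shared map: this is the race.
			extensions[fmt.Sprintf("trigger-%d", id)] = id
		}(i)
	}
	wg.Wait()
}
```

Running this sketch with `go test -race` flags the unsynchronized map writes even when the runtime panic itself does not fire.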
Additional Info
- Kubernetes version:

  Output of `kubectl version`:

  Client Version: v1.31.2
  Kustomize Version: v5.4.2
  Server Version: v1.32.5-eks-5d4a308

- Tekton Pipeline version:

  Output of `tkn version` or `kubectl get pods -n tekton-pipelines -l app=tekton-pipelines-controller -o=jsonpath='{.items[0].metadata.labels.version}'`:

  Client version: 0.38.1
  Pipeline version: v0.62.3
  Triggers version: v0.29.1
  Dashboard version: v0.49.0
These gists contain stack trace dumps from panics caused by this issue:
- https://gist.github.com/csullivanupgrade/feacef817bc8976957e515cbbee273ba
- https://gist.github.com/csullivanupgrade/981a4b63cb53aaa3e0b552587863fe1e