Shared not ready with dapr 1.14.4
When I use the Helm chart with the image tag set to 1.14.4 in the values, and the Dapr control plane is also installed at version 1.14.4, the pod (whether in Deployment or DaemonSet mode) does not run. It gets stuck on the log "Fetching initial identity certificate" and then restarts.
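For reference, the override I mean is roughly the following (a sketch only; the exact key path may differ from the chart's values.yaml):

# Sketch of the image tag override; the key path shared.daprd.image.tag is
# assumed here and may not match the chart's actual values.yaml.
shared:
  daprd:
    image:
      tag: "1.14.4"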
Hi @luigirende, I tried your helm chart changes for the dapr 1.14.4 runtime, but my deployment pod is failing with the below error:
Defaulted container "daprd" out of: daprd, shared-init-container (init)
{"app_id":"atlan","instance":"dapr-shared-chart-dapr-shared-chart-66667c978d-zhn7s","level":"info","msg":"Starting Dapr Runtime -- version 1.14.4 -- commit 583960dc90120616124b60ad2b7820fc0b3edf44","scope":"dapr.runtime","time":"2024-11-30T06:23:28.547806045Z","type":"log","ver":"1.14.4"}
{"app_id":"atlan","instance":"dapr-shared-chart-dapr-shared-chart-66667c978d-zhn7s","level":"info","msg":"Log level set to: info","scope":"dapr.runtime","time":"2024-11-30T06:23:28.54786042Z","type":"log","ver":"1.14.4"}
{"app_id":"atlan","instance":"dapr-shared-chart-dapr-shared-chart-66667c978d-zhn7s","level":"fatal","msg":"trust anchors are required","scope":"dapr.runtime","time":"2024-11-30T06:23:28.548124712Z","type":"log","ver":"1.14.4"}
Also, the init container is not able to copy the root cert from the Dapr control plane; the ConfigMap it creates stays empty:
apiVersion: v1
data:
  dapr-cert-chain: ""
  dapr-cert-key: ""
  dapr-trust-anchors: ""
kind: ConfigMap
metadata:
  creationTimestamp: "2024-11-30T06:19:18Z"
  name: dapr-shared-chart-shared-cm
  namespace: atlan-dapr-shared
  resourceVersion: "2430"
  uid: 513610a4-1d9b-4d71-a231-271998b4bfe7
Any idea how to solve this?
This is my original issue: https://github.com/dapr/dapr-shared/issues/60
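For extra context, my understanding is that the daprd container gets its identity material from this ConfigMap via env vars, roughly as in the sketch below (not the chart's exact manifest, but DAPR_TRUST_ANCHORS, DAPR_CERT_CHAIN and DAPR_CERT_KEY are the variable names daprd reads), which would explain why the empty keys surface as the "trust anchors are required" fatal:

# Sketch only, not the chart's actual rendered manifest: how the shared ConfigMap
# values would typically be wired into the daprd container's environment.
env:
  - name: DAPR_TRUST_ANCHORS
    valueFrom:
      configMapKeyRef:
        name: dapr-shared-chart-shared-cm
        key: dapr-trust-anchors
  - name: DAPR_CERT_CHAIN
    valueFrom:
      configMapKeyRef:
        name: dapr-shared-chart-shared-cm
        key: dapr-cert-chain
  - name: DAPR_CERT_KEY
    valueFrom:
      configMapKeyRef:
        name: dapr-shared-chart-shared-cm
        key: dapr-cert-key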
@ujala-singh have you deployed dapr in the default namespace dapr-system?
@luigirende Nope, I am deploying it in a custom namespace say my-dapr.
But I did override this value here:
controlPlane:
  # -- Namespace where Dapr Control Plane is.
  namespace: "my-dapr"
@ujala-singh have you tried to set the property
scheduler:
  address: dapr-scheduler-server-0.dapr-scheduler-server.dapr-system.svc.cluster.local:50006
changing the namespace to my-dapr?
@luigirende Yes I did, I updated the below values:
scheduler:
  address: dapr-scheduler-server-0.dapr-scheduler-server.my-dapr.svc.cluster.local:50006
controlPlane:
  # -- Namespace where Dapr Control Plane is.
  namespace: "my-dapr"
  # -- Trust Domain used by the Dapr Control Plane
  trustDomain: "cluster.local"
  # -- The Dapr Control Plane operator address.
  operator:
    address: dapr-api.my-dapr.svc.cluster.local
    port: 443
  # -- The Dapr Control Plane sentry address.
  sentry:
    address: dapr-sentry.my-dapr.svc.cluster.local
    port: 443
  # -- The Dapr Control Plane placement server address.
  placementServer:
    address: dapr-placement-server.my-dapr.svc.cluster.local
    port: 50005
It appears that the issue is still present. In order to make the shared dapr instances start, I had to do the following manual fixes:
- As noted in the issue above, the default configuration assumes port 80 for some control plane components, so their addresses need to be overridden:
shared:
  controlPlane:
    sentryAddress: dapr-sentry.dapr-system.svc.cluster.local:443
    operatorAddress: dapr-api.dapr-system.svc.cluster.local:443
    placementServerAddress: dapr-placement-server.dapr-system.svc.cluster.local:50005
- (BUG) The Helm chart adds the `DAPR_CONTROL_PLANE_NAMESPACE` env variable, when the correct name is `DAPR_CONTROLPLANE_NAMESPACE` (note the lack of `_` in `CONTROLPLANE`). See https://github.com/dapr/dapr/blob/88f467a2feef18defec06b1425b8229b27a18045/pkg/security/consts/consts.go#L47
  There is no way to adjust this env variable via the chart values or to add new custom environment variables.
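Until the chart allows overriding it, one possible stopgap (a sketch, reusing the container name from the logs above; the Deployment name depends on your release) is to patch the rendered Deployment and add the correctly named variable yourself, e.g. with a strategic merge patch applied via kubectl patch:

# Hypothetical strategic merge patch: adds the env var name daprd actually reads
# (DAPR_CONTROLPLANE_NAMESPACE) alongside the misnamed one set by the chart.
spec:
  template:
    spec:
      containers:
        - name: daprd
          env:
            - name: DAPR_CONTROLPLANE_NAMESPACE
              value: dapr-system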