Pulsar Manager v0.2.0: JWT setup and admin account creation
Fixes #133 Fixes #108 Fixes #86 Fixes #31
Motivation
The docker image for Pulsar Manager v0.2.0 includes support for JWT-based configurations, so long as the appropriate environment variables are specified:
- JWT_TOKEN
- PRIVATE_KEY/PUBLIC_KEY -or- SECRET_KEY
The k8s secrets that can populate each of the above environment variables are created by the 'prepare_helm_release.sh' script when it is run prior to installation of the helm chart. Unfortunately, the helm chart doesn't currently provide a way to expose these secrets directly to the Pulsar Manager deployment. Instead, it requires that these values be set separately in the "configData" map of the "pulsar_manager" section, which creates the opportunity for them to go out of sync. Further, it exposes potentially sensitive values (the superuser token and the key material) in clear text in the ConfigMap alongside other environment settings.
Pulsar Manager v0.2.0 has also introduced a user management capability that is enabled by default and requires that the initial admin/superuser account be created via a web service call (see https://github.com/apache/pulsar-manager/blob/master/README.md).
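For reference, the account-creation procedure described in the pulsar-manager README boils down to two calls: fetch a CSRF token, then PUT the superuser with that token echoed in both a header and a cookie. A minimal sketch follows; the backend URL, account name, password, and email here are illustrative assumptions, and the request is only fired when explicitly requested:

```shell
#!/bin/sh
# Sketch of the two-step admin creation per the pulsar-manager README.
# BACKEND_URL is an assumption; point it at your backend service.
BACKEND_URL="${BACKEND_URL:-http://localhost:7750}"

create_admin() {
  # Step 1: obtain a CSRF token from the backend.
  csrf_token=$(curl -s "${BACKEND_URL}/pulsar-manager/csrf-token")

  # Step 2: create the superuser, echoing the token back in both the
  # X-XSRF-TOKEN header and the XSRF-TOKEN cookie.
  curl -s \
    -H "X-XSRF-TOKEN: ${csrf_token}" \
    -H "Cookie: XSRF-TOKEN=${csrf_token};" \
    -H "Content-Type: application/json" \
    -X PUT "${BACKEND_URL}/pulsar-manager/users/superuser" \
    -d '{"name": "admin", "password": "apachepulsar", "description": "initial admin", "email": "admin@example.org"}'
}

# Invoke only when explicitly requested, e.g.:
#   RUN_CREATE=1 BACKEND_URL=http://pulsar-manager:7750 sh create_admin.sh
if [ -n "${RUN_CREATE:-}" ]; then
  create_admin
fi
```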
Finally, the current helm chart mistakenly conflates the Pulsar Manager admin user's credentials with those of the Postgres user account used to establish a connection to the Pulsar Manager's internal database -- it mounts the admin user's credentials secret to the "USERNAME" and "PASSWORD" environment variables, even though these variables control only the datasource connection parameters. Further, with the current secret-mounting approach, changing the admin credentials actually breaks the Pulsar Manager deployment, because the pulsar database credentials are hardcoded in the postgres db initialization script.
The changes included in this PR do the following:
- Enable mounting of the appropriate JWT and key secrets to the appropriate environment variables when JWT authentication is enabled;
- Automatically create (when requested) the initial admin account via a web service call on container startup. This initial account will be configured with the credentials contained in the manager's admin secret.
- The manager's admin secret will no longer be used to set the USERNAME and PASSWORD environment variables for reasons described above; hence, changing the admin secret will now correctly only affect the credentials recognized as the default login to the Pulsar Manager and will no longer break the connection to Pulsar Manager's internal database.
Modifications
- In charts/pulsar/values.yaml:
- Added an auth.superUsers entry of type "manager", which defaults to blank. Setting a value here determines the JWT secret (generated by the prepare_helm_release.sh script) that the Pulsar Manager will use to communicate with the Pulsar brokers when JWT authentication is enabled.
- Added pulsar_manager.autoCreateAdminAccount, which defaults to true. When this value is set to true, a sidecar will be added to the Pulsar Manager pod that creates the admin account from the configured credentials when the container launches.
- Added 'USERNAME' and 'PASSWORD' with values of pulsar/pulsar to pulsar_manager.configData, since these values currently must agree with the hardcoded values in the database initialization script.
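Taken together, the new values might be set roughly as follows. This is a sketch only: the key names follow the bullets above, and the "manager" superuser name shown is an illustrative assumption (it defaults to blank):

```yaml
auth:
  superUsers:
    # Subject of the JWT the Pulsar Manager presents to the brokers;
    # the matching token secret is generated by prepare_helm_release.sh.
    manager: "pulsar-manager-admin"

pulsar_manager:
  # Add a sidecar that creates the initial admin account on startup.
  autoCreateAdminAccount: true
  configData:
    # Must agree with the hardcoded values in the DB initialization script.
    USERNAME: pulsar
    PASSWORD: pulsar
```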
- In charts/pulsar/templates/pulsar-manager-deployment.yaml:
- REMOVED the mounts of the pulsar admin secret to the USERNAME and PASSWORD environment variables.
- ADDED env secretKeyRefs for JWT_TOKEN to the "manager" superUser token when jwt authentication is enabled
- ADDED env file path value for SECRET_KEY when jwt authentication is enabled and jwt.usingSecretKey is true
- ADDED env file path value for PRIVATE_KEY and PUBLIC_KEY when jwt authentication is enabled and jwt.usingSecretKey is false
- ADDED volume mount for pulsar-manager-keys when jwt authentication is enabled
- ADDED volume for the symmetric token signing key when jwt authentication is enabled and jwt.usingSecretKey is true
- ADDED volume for the asymmetric token signing key when jwt authentication is enabled and jwt.usingSecretKey is false
- ADDED sidecar container to initialize admin user account when pulsar_manager.autoCreateAdminAccount is true. Upon launch, the sidecar will attempt to curl the pulsar-manager's main page every 3 seconds until it receives a successful status. At that point, it will perform the procedure described in the pulsar manager README.md (get a CSRF token, then make another service call to create the admin account). It will then sleep forever.
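As a sketch, the sidecar added by pulsar_manager.autoCreateAdminAccount could look roughly like the container spec below. The container name, image, ports, secret names, and account fields are illustrative assumptions, not the chart's exact output:

```yaml
- name: pulsar-manager-init          # hypothetical container name
  image: curlimages/curl:latest      # any image providing sh and curl
  command: ["/bin/sh", "-c"]
  args:
    - |
      # Poll the manager's main page every 3 seconds until it responds.
      until curl -sf http://localhost:9527/ > /dev/null; do
        sleep 3
      done
      # Procedure from the pulsar-manager README: fetch a CSRF token,
      # then create the admin account with it.
      CSRF_TOKEN=$(curl -s http://localhost:7750/pulsar-manager/csrf-token)
      curl -s \
        -H "X-XSRF-TOKEN: ${CSRF_TOKEN}" \
        -H "Cookie: XSRF-TOKEN=${CSRF_TOKEN};" \
        -H "Content-Type: application/json" \
        -X PUT http://localhost:7750/pulsar-manager/users/superuser \
        -d "{\"name\": \"${ADMIN_USER}\", \"password\": \"${ADMIN_PASSWORD}\", \"description\": \"initial admin\", \"email\": \"admin@example.org\"}"
      # Sleep forever so the completed sidecar does not restart the pod.
      sleep infinity
  env:
    - name: ADMIN_USER
      valueFrom:
        secretKeyRef: {name: pulsar-manager-secret, key: UI_USERNAME}  # assumed names
    - name: ADMIN_PASSWORD
      valueFrom:
        secretKeyRef: {name: pulsar-manager-secret, key: UI_PASSWORD}  # assumed names
```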
Verifying this change
- [x] Make sure that the change passes the CI checks.
Okay, tests done and passing, I think this is good to go.
@tuteng Could these changes be merged? I would love to be able to set the JWT via a Secret.
@csthomas1 would you like to rebase the changes?
@Mortom123 Please review this. Would something like this be useful?
Yes, I'll try to take a look at this later this week.
@csthomas1 Awesome! Please ensure compatibility with changes that @Mortom123 made in #457. Looking forward to your contribution!
@csthomas1 It's possible to get an ssh shell into the build VM if you open a PR to your own fork. This comment and screenshot might help with that: https://github.com/apache/pulsar-helm-chart/pull/448#issuecomment-1919052024 . The ssh shell is only available in fork builds for security reasons. It won't get activated for the master branch build in the fork, since the rule is based on a PR event.
In the shell, you can run k9s or kubectl get all -A and any kubectl commands after the cluster is running.