fix(helm): add missing env config on job
The job doesn't work correctly with an external database: the init container that checks whether the database is reachable doesn't receive the same env values as the container that initializes the database config.
I propose fixing this small issue by simply aligning the env provided to the initContainer with the env of the main container.
The fix is minor; if you need more information, feel free to ask.
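A rough sketch of the idea, with illustrative field names rather than the chart's exact contents: the same `extraEnv` block is rendered for both the initContainer and the main container, so both see identical values.

```yaml
# Sketch only - structure is assumed, not copied from initializer-job.yaml.
initContainers:
  - name: wait-for-db
    {{- with .Values.extraEnv }}
    env:
      {{- toYaml . | nindent 6 }}
    {{- end }}
containers:
  - name: initializer
    {{- with .Values.extraEnv }}
    env:
      {{- toYaml . | nindent 6 }}
    {{- end }}
```

Rendering the block from the same values key in both places is what keeps the connectivity check and the actual initialization in sync.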
DryRun Security Summary
The pull request introduces several improvements to the Kubernetes Helm chart for the DefectDojo application, including enhanced configurability, security best practices, and better readability of the YAML configuration.
Summary:
The code changes in this pull request are related to the Kubernetes Helm chart for the DefectDojo application, specifically the initializer-job.yaml file. The changes introduce several improvements that enhance the configurability and security of the deployment process.
The key changes include the addition of support for extraEnv environment variables, which allows for greater customization options during deployment. Additionally, the formatting changes in the toYaml function usage improve the readability and consistency of the YAML configuration.
From a security perspective, the changes demonstrate a strong focus on best practices, such as the use of Kubernetes secrets for storing sensitive information, the inclusion of resource limits to prevent resource exhaustion, the use of an initialization container to ensure the database is available before the main application starts, and the support for the Cloud SQL Proxy container to securely connect to a Google Cloud SQL database.
Files Changed:
- helm/defectdojo/templates/initializer-job.yaml: This file has been updated to include the following changes:
  - Support for `extraEnv` environment variables, allowing for additional customization options during deployment.
  - Formatting changes in the `toYaml` function usage, improving the readability and consistency of the YAML configuration.
  - Inclusion of Kubernetes secrets for storing sensitive information, such as the database password, which is a good security practice.
  - Ability to set resource limits for the initializer container, preventing potential resource exhaustion and denial-of-service attacks.
  - Use of an `initContainers` section to run a `wait-for-db` container, ensuring the database is available before the main application container starts.
  - Support for the Cloud SQL Proxy container, which helps protect the database connection from direct access.
Overall, the changes in this pull request demonstrate a strong focus on improving the configurability and security of the DefectDojo Helm chart deployment, which is a positive development from an application security perspective.
Code Analysis
We ran 9 analyzers against 1 file and 0 analyzers had findings. 9 analyzers had no findings.
Riskiness
:green_circle: Risk threshold not exceeded.
Same as https://github.com/DefectDojo/django-DefectDojo/pull/11015#issuecomment-2399681231
I'm not sure I understand the reason here. /wait-for-it.sh (the only command in the wait-for-db container) tests only the TCP connection. It does not try to log in to the DB. Why is DD_DATABASE_PASSWORD needed here?
@LeoFVO kiblik is correct here - we only take PRs against the dev or bugfix branches. This seems like a dev branch type of PR IMHO.
I'm not sure I understand the reason here.
/wait-for-it.sh (the only command in the wait-for-db container) is testing only the TCP connection. It is not trying to log in to the DB. Why is DD_DATABASE_PASSWORD needed here?
Alright, we could add only extraEnv here; the password isn't useful :)
I removed the unused database password and pointed to dev branch. Feel free to squash if required.
I just noticed that the job doesn't have a helm hook (https://helm.sh/docs/topics/charts_hooks/). I think we should consider adding:
"helm.sh/hook": pre-install, pre-upgrade
"helm.sh/hook-delete-policy": hook-succeeded
This will make the job run before installing or upgrading the deployment, for example. The current chart makes the deployment loop while waiting for the migration to run, and when using Argo CD, for example, the job is constantly relaunched (looping infinitely) because of the attached TTL: the job is deleted after 60 seconds, but Argo CD restarts it because it is missing from the application. Adding Helm hooks would solve these issues as a bonus.
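For reference, the suggested annotations would sit in the Job's metadata, roughly like this (the metadata names are illustrative, not the chart's current content, and the thread below raises an objection to pre-install):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-initializer   # illustrative name
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-delete-policy": hook-succeeded
```

With `hook-succeeded`, Helm deletes the Job once it completes, so no TTL-based cleanup is needed.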
I agree that right now the setup might not be the best for Argo. I would avoid pre-install, pre-upgrade: it would create a circular dependency, and the deployment might not succeed because the database needs to be started before the initializer.
I would maybe go with:
- define that if `.Values.initializer.keepSeconds` is 0 or undefined, `ttlSecondsAfterFinished` needs to be removed
- drop the date from the initializer job, to avoid starting different jobs even when it is not needed
- a guideline for Argo users (info on how to set values to be able to start the DD stack), e.g. set `.Values.initializer.keepSeconds` to zero
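The first point could be sketched like this in the Job template; Go templates treat both 0 and an undefined value as falsy, so the field is dropped in both cases (the value key is taken from the discussion, the surrounding structure is assumed):

```yaml
spec:
  {{- if .Values.initializer.keepSeconds }}
  ttlSecondsAfterFinished: {{ .Values.initializer.keepSeconds }}
  {{- end }}
```

Argo users would then set `initializer.keepSeconds` to 0 in their values, so the Job carries no TTL and Argo CD stops relaunching it after deletion.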
Agreed. Should this be done in this PR or in another one?
The change is not so large that it would "have to" be in a separate PR. But smaller PRs are usually reviewed and accepted more quickly.
Okay, I'll make these changes in another PR then. Can this one be validated?
I'm still not sure why a simple script (`/wait-for-it.sh ${DD_DATABASE_HOST:-postgres}:${DD_DATABASE_PORT:-5432}`) needs additional env vars, since DD_DATABASE_HOST and DD_DATABASE_PORT are stored in an already included ConfigMap, but why not.
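To illustrate this point (the container shape and helper name are assumptions, not copied from the chart): the host and port used by the TCP check can already reach the container via envFrom from the chart's ConfigMap, with fallback defaults in the command itself.

```yaml
initContainers:
  - name: wait-for-db
    envFrom:
      - configMapRef:
          name: {{ include "defectdojo.fullname" . }}   # hypothetical helper name
    command:
      - sh
      - -c
      - /wait-for-it.sh "${DD_DATABASE_HOST:-postgres}:${DD_DATABASE_PORT:-5432}"
```

Since the script only opens a TCP socket and never authenticates, no credentials are required for this step.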
Maybe one more comment: there is an open PR (#11168) that is trying to harmonize some parts of the Helm chart (using `with` instead of `if`). Can you apply this approach here as well, please?
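The harmonization referred to here replaces if-guards with with-blocks; a sketch of the difference, with an illustrative indentation depth:

```yaml
# "if" style: the value path must be repeated inside the block
{{- if .Values.extraEnv }}
env:
  {{- toYaml .Values.extraEnv | nindent 8 }}
{{- end }}

# "with" style: the block is guarded and rescoped to the value in one step
{{- with .Values.extraEnv }}
env:
  {{- toYaml . | nindent 8 }}
{{- end }}
```

Both render nothing when `extraEnv` is empty; `with` is just shorter and avoids repeating the path.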
Ohh I see, in the context of #11015, it makes sense.
Done