Defined notes and rules for control BSI APP.4.4.A5
Description:
To check against BSI APP.4.4.A5, this commit adds two rules:
- an etcd backup rule (manual)
- a check for CRDs of known backup solutions
Review Hints:
For the etcd backup we could also implement an automated rule, since OpenShift can create automatic etcd backups. However, this feature is behind a feature gate, so implementing it now is not a good idea. It would also require additional permissions for the compliance operator (see https://github.com/sig-bsi-grundschutz/content/commit/cee59e48091374e3edc6d39fac104ba79650d41d ).
The CRD check for backup solutions may be a false friend. While it checks whether a backup solution is installed, it checks neither whether the backup solution is correctly configured nor whether a restore is possible. The rule might therefore lead to the wrongful conclusion that a backup exists.
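To make the limitation concrete, here is a minimal sketch of the check's logic (illustrative only; the real check runs inside the Compliance Operator, and the hard-coded CRD list below is a stand-in for the stripped output of `oc get crds -o name`). It shows that the rule passes on mere CRD presence, which says nothing about configuration or restorability:

```shell
#!/bin/sh
# Illustrative sketch -- "crds" stands in for cluster CRD names as returned
# by: oc get crds -o name (with the API-group prefix stripped)
crds='backups.velero.io
machineconfigs.machineconfiguration.openshift.io'

# Unescaped form of var_backup_solution_crds_regex
regex='^DataProtectionApplication\.oadp\.openshift\.io$|^backups\.velero\.io$|^policies\.config\.kio\.kasten\.io$'

# The check passes as soon as any known backup CRD exists; it does not
# inspect backup schedules, targets, or restore paths.
if printf '%s\n' "$crds" | grep -Eq "$regex"; then
  echo "PASS: backup solution CRD found"
else
  echo "FAIL: no backup solution CRD found"
fi
```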
Hi @sluetze. Thanks for your PR.
I'm waiting for a ComplianceAsCode member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.
Once the patch is verified, the new status will be reflected by the ok-to-test label.
I understand the commands that are listed here.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Start a new ephemeral environment with changes proposed in this pull request:
ocp4 (from CTF) Environment (using Fedora as testing environment)
:robot: A k8s content image for this PR is available at:
ghcr.io/complianceascode/k8scontent:11717
This image was built from commit: e7b4cfa3b4b2b97edddd9716fd49ad232e7c69b2
Click here to see how to deploy it
If you already have the Compliance Operator deployed:
utils/build_ds_container.py -i ghcr.io/complianceascode/k8scontent:11717
Otherwise deploy the content and operator together by checking out ComplianceAsCode/compliance-operator and:
CONTENT_IMAGE=ghcr.io/complianceascode/k8scontent:11717 make deploy-local
Code Climate has analyzed commit 778a1be9 and detected 0 issues on this pull request.
The test coverage on the diff in this pull request is 100.0% (50% is the threshold).
This pull request will bring the total coverage in the repository to 59.3% (0.0% change).
View more on Code Climate.
/hold for test
Verification passed with 4.16.0-0.nightly-2024-03-25-100907 + compliance-operator + PR #11717 code
- Install the Compliance Operator (CO)
- Create the ScanSettingBinding (SSB)
$ oc get pb
NAME CONTENTIMAGE CONTENTFILE STATUS
ocp4 ghcr.io/complianceascode/k8scontent:latest ssg-ocp4-ds.xml VALID
rhcos4 ghcr.io/complianceascode/k8scontent:latest ssg-rhcos4-ds.xml VALID
upstream-ocp4 ghcr.io/complianceascode/k8scontent:11717 ssg-ocp4-ds.xml VALID
upstream-rhcos4 ghcr.io/complianceascode/k8scontent:11717 ssg-rhcos4-ds.xml VALID
$ oc compliance bind -N test profile/upstream-ocp4-bsi profile/upstream-ocp4-bsi-node
Creating ScanSettingBinding test
$ oc get suite
NAME PHASE RESULT
test DONE NON-COMPLIANT
$ oc get scan
NAME PHASE RESULT
upstream-ocp4-bsi DONE NON-COMPLIANT
upstream-ocp4-bsi-node-master DONE COMPLIANT
upstream-ocp4-bsi-node-worker DONE COMPLIANT
$ oc get cr
No resources found in openshift-compliance namespace.
$ oc get ccr -l compliance.openshift.io/automated-remediation=,compliance.openshift.io/check-status=FAIL
No resources found in openshift-compliance namespace.
$ oc get ccr
NAME STATUS SEVERITY
upstream-ocp4-bsi-api-server-anonymous-auth PASS medium
upstream-ocp4-bsi-etcd-backup MANUAL medium
upstream-ocp4-bsi-general-backup-solution-installed FAIL medium
upstream-ocp4-bsi-general-namespace-separation MANUAL medium
upstream-ocp4-bsi-kubeadmin-removed FAIL medium
upstream-ocp4-bsi-node-master-kubelet-anonymous-auth PASS medium
upstream-ocp4-bsi-node-worker-kubelet-anonymous-auth PASS medium
upstream-ocp4-bsi-ocp-insecure-allowed-registries-for-import PASS medium
upstream-ocp4-bsi-ocp-insecure-registries PASS medium
upstream-ocp4-bsi-rbac-least-privilege MANUAL high
upstream-ocp4-bsi-scc-limit-host-dir-volume-plugin MANUAL medium
upstream-ocp4-bsi-scc-limit-ipc-namespace MANUAL medium
upstream-ocp4-bsi-scc-limit-net-raw-capability MANUAL medium
upstream-ocp4-bsi-scc-limit-network-namespace MANUAL medium
upstream-ocp4-bsi-scc-limit-privileged-containers MANUAL medium
upstream-ocp4-bsi-scc-limit-process-id-namespace MANUAL medium
upstream-ocp4-bsi-scc-limit-root-containers MANUAL medium
- YAML of rule upstream-ocp4-etcd-backup
$ oc get rule upstream-ocp4-etcd-backup -oyaml
apiVersion: compliance.openshift.io/v1alpha1
description: |-
Back up your cluster's etcd data regularly and store it in a secure location, ideally outside the OpenShift Container Platform environment. Do not take an etcd backup before the first certificate rotation completes, which occurs 24 hours after installation, otherwise the backup will contain expired certificates. It is also recommended to take etcd backups during non-peak usage hours because the etcd snapshot has a high I/O cost.
For more information, follow the relevant documentation ( https://docs.openshift.com/container-platform/latest/backup_and_restore/control_plane_backup_and_restore/backing-up-etcd.html#backing-up-etcd-data_backup-etcd ).
id: xccdf_org.ssgproject.content_rule_etcd_backup
instructions: Ensure that you have a process in place that performs recurring
  backups for etcd.
kind: Rule
metadata:
annotations:
compliance.openshift.io/image-digest: pb-upstream-ocp47mp6x
compliance.openshift.io/profiles: upstream-ocp4-bsi,upstream-ocp4-bsi-2022
compliance.openshift.io/rule: etcd-backup
creationTimestamp: "2024-03-26T13:43:17Z"
generation: 1
labels:
compliance.openshift.io/profile-bundle: upstream-ocp4
name: upstream-ocp4-etcd-backup
namespace: openshift-compliance
ownerReferences:
- apiVersion: compliance.openshift.io/v1alpha1
blockOwnerDeletion: true
controller: true
kind: ProfileBundle
name: upstream-ocp4
uid: a897d6ca-b238-4d69-8688-cd9530b5ca95
resourceVersion: "163990"
uid: 6c697917-5a15-4e9e-b5c1-68c8b99b2986
rationale: While etcd automatically recovers from temporary failures, issues may arise
if an etcd cluster loses more than (N-1)/2 or when an update goes wrong. Recurring
backups of etcd enable you to recover from a disastrous fail.
severity: medium
title: Configure Recurring Backups For etcd
- YAML of rule upstream-ocp4-general-backup-solution-installed
$ oc get rule upstream-ocp4-general-backup-solution-installed -oyaml
apiVersion: compliance.openshift.io/v1alpha1
checkType: Platform
description: Backup and restore are fundamental practices when it comes to disaster
  recovery. By utilizing backup software you are able to back up (and restore) data
  that would be lost if your cluster crashes beyond recoverability. There are multiple
  backup solutions on the market which diverge in features. Thus some of them might
  only back up your cluster, others might also be able to back up VMs or PVCs running
  in your cluster.
id: xccdf_org.ssgproject.content_rule_general_backup_solution_installed
instructions: "Run the following command to retrieve the customresourcedefinitions
  objects in the system:\n$ oc get crds \nMake sure there is a CRD of a backup solution.
  Also make sure that the backup solution is properly configured and that you are
  able to recover from the backups.\nYou can add your known CRD to the var_backup_solution_crds_regex
  to allowlist your own backup solution."
kind: Rule
metadata:
annotations:
compliance.openshift.io/image-digest: pb-upstream-ocp47mp6x
compliance.openshift.io/profiles: upstream-ocp4-bsi-2022,upstream-ocp4-bsi
compliance.openshift.io/rule: general-backup-solution-installed
creationTimestamp: "2024-03-26T13:43:19Z"
generation: 1
labels:
compliance.openshift.io/profile-bundle: upstream-ocp4
name: upstream-ocp4-general-backup-solution-installed
namespace: openshift-compliance
ownerReferences:
- apiVersion: compliance.openshift.io/v1alpha1
blockOwnerDeletion: true
controller: true
kind: ProfileBundle
name: upstream-ocp4
uid: a897d6ca-b238-4d69-8688-cd9530b5ca95
resourceVersion: "164091"
uid: 3bf38c15-891e-4846-b26f-a09a9084c68e
rationale: Backup and Recovery abilities are a necessity to recover from a disaster.
severity: medium
title: A Backup Solution Has To Be Installed
- YAML of variable upstream-ocp4-var-backup-solution-crds-regex
$ oc get var upstream-ocp4-var-backup-solution-crds-regex -oyaml
apiVersion: compliance.openshift.io/v1alpha1
description: '''A regular expression that lists all CRDs that are known to be part
of a backup solution'''
id: xccdf_org.ssgproject.content_value_var_backup_solution_crds_regex
kind: Variable
metadata:
annotations:
compliance.openshift.io/image-digest: pb-upstream-ocp47mp6x
creationTimestamp: "2024-03-26T13:43:13Z"
generation: 1
labels:
compliance.openshift.io/profile-bundle: upstream-ocp4
name: upstream-ocp4-var-backup-solution-crds-regex
namespace: openshift-compliance
ownerReferences:
- apiVersion: compliance.openshift.io/v1alpha1
blockOwnerDeletion: true
controller: true
kind: ProfileBundle
name: upstream-ocp4
uid: a897d6ca-b238-4d69-8688-cd9530b5ca95
resourceVersion: "163899"
uid: 5f8d6e12-6fb4-46f5-b8cc-ad3de0b9d318
title: Known CRDs which are provided by backup solutions
type: string
value: ^DataProtectionApplication\\.oadp\\.openshift\\.io$|^backups\\.velero\\.io$|^policies\\.config\\.kio\\.kasten\\.io$
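Note that the stored value above is escaped: each `\\.` becomes a literal `\.` in the effective regular expression, so dots match literally. A quick sanity check of the unescaped pattern (using `grep -E`, which supports the same alternation syntax):

```shell
# Effective (unescaped) form of var_backup_solution_crds_regex
regex='^DataProtectionApplication\.oadp\.openshift\.io$|^backups\.velero\.io$|^policies\.config\.kio\.kasten\.io$'

# A known backup CRD matches
printf 'backups.velero.io\n' | grep -Eq "$regex" && echo "match"

# The dots are escaped, so an arbitrary character in their place must not match
printf 'backupsXvelero.io\n' | grep -Eq "$regex" || echo "no match"
```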
/unhold
/hold for test
/test e2e-aws-ocp4-bsi /test e2e-aws-ocp4-bsi-node /test e2e-aws-rhcos4-bsi
/test e2e-aws-ocp4-bsi-node
@sluetze: Cannot trigger testing until a trusted user reviews the PR and leaves an /ok-to-test message.
In response to this:
/test e2e-aws-ocp4-bsi-node
Hmm, I can't trigger the test. But there isn't a node rule in the test set, so there should not be anything relevant.
I was unable to find anything relevant in the test logs. To me it seems as if the compliance-operator was not completely deployed, but I was a little bit overwhelmed by all the logs.
/ok-to-test
@sluetze: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
| Test name | Commit | Details | Required | Rerun command |
|---|---|---|---|---|
| ci/prow/4.13-images | e7b4cfa3b4b2b97edddd9716fd49ad232e7c69b2 | link | true | /test 4.13-images |
Full PR test history. Your PR dashboard.
/test e2e-aws-ocp4-bsi-node
Overriding the requirement for Ansible hardening tests. This is adding rules for OCP4 and should not affect Ansible behavior.