
Checkov returns empty output with Kubernetes YAML file

epodegrid opened this issue 2 years ago • 13 comments

Describe the issue
Checkov produces empty output when scanning a Kubernetes YAML file. I generated the file from a Helm chart using helm template prometheus > render.yaml, then ran checkov -f render.yaml --framework kubernetes, and the output is only the Checkov logo.

       _               _              
   ___| |__   ___  ___| | _______   __
  / __| '_ \ / _ \/ __| |/ / _ \ \ / /
 | (__| | | |  __/ (__|   < (_) \ V / 
  \___|_| |_|\___|\___|_|\_\___/ \_/  
                                      
By bridgecrew.io | version: 2.0.1065 
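
For reference, the exact sequence of commands (nothing beyond what is described above, just collected in one place):

helm template prometheus > render.yaml
checkov -f render.yaml --framework kubernetes
# expected: Kubernetes scan results for the rendered manifests
# actual: only the checkov logo shown above is printed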

The YAML file is as follows:

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: prometheus
    group: com.stakater.platform
    provider: stakater
    version: 2.2.0-rc.0
    chart: prometheus-1.0.32
    release: release-name
    heritage: Helm
  name: monitoring-k8s
  namespace: default
---
apiVersion: v1 
# document continues

Additional context
Systems tried: Ubuntu, WSL (Ubuntu, Debian). It fails on all of them.

Log info:

2022-04-15 18:52:52,711 [MainThread  ] [DEBUG]  Leveraging the bundled IAM Definition.
2022-04-15 18:52:52,711 [MainThread  ] [DEBUG]  Leveraging the IAM definition at /home/epodegrid/.local/lib/python3.9/site-packages/policy_sentry/shared/data/iam-definition.json
2022-04-15 18:52:52,826 [MainThread  ] [DEBUG]  Loading external checks from /home/epodegrid/.local/lib/python3.9/site-packages/checkov/bicep/checks/graph_checks
2022-04-15 18:52:52,904 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-15 18:52:52,979 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-15 18:52:53,002 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-15 18:52:53,002 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-15 18:52:53,034 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-15 18:52:53,040 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-15 18:52:53,074 [MainThread  ] [DEBUG]  Popen(['git', 'version'], cwd=/home/epodegrid/Desktop/glitchy, universal_newlines=False, shell=None, istream=None)
2022-04-15 18:52:53,081 [MainThread  ] [DEBUG]  Popen(['git', 'version'], cwd=/home/epodegrid/Desktop/glitchy, universal_newlines=False, shell=None, istream=None)
2022-04-15 18:52:53,182 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-15 18:52:53,182 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-15 18:52:53,182 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-15 18:52:53,183 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-15 18:52:53,183 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-15 18:52:53,201 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-15 18:52:53,288 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-15 18:52:53,392 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-15 18:52:53,393 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-15 18:52:53,394 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-15 18:52:53,397 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-15 18:52:53,397 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-15 18:52:53,397 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-15 18:52:53,398 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-15 18:52:53,400 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-15 18:52:53,401 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-15 18:52:53,403 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-15 18:52:53,408 [MainThread  ] [DEBUG]  No API key present; setting include_all_checkov_policies to True
2022-04-15 18:52:53,408 [MainThread  ] [DEBUG]  Checkov version: 2.0.1065
2022-04-15 18:52:53,408 [MainThread  ] [DEBUG]  Python executable: /usr/bin/python3
2022-04-15 18:52:53,408 [MainThread  ] [DEBUG]  Python version: 3.9.7 (default, Sep 10 2021, 14:59:43) 
[GCC 11.2.0]
2022-04-15 18:52:53,408 [MainThread  ] [DEBUG]  Checkov executable (argv[0]): /home/epodegrid/.local/bin/checkov
2022-04-15 18:52:53,408 [MainThread  ] [DEBUG]  Command Line Args:   -f render.yaml --framework kubernetes
Defaults:
  --branch:          master
  --download-external-modules:False
  --external-modules-download-path:.external_modules
  --evaluate-variables:True

2022-04-15 18:52:53,408 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): kubernetes
2022-04-15 18:52:53,408 [MainThread  ] [DEBUG]  kubernetes_runner declares no system dependency checks required.
2022-04-15 18:52:53,408 [MainThread  ] [DEBUG]  No API key found. Scanning locally only.
2022-04-15 18:52:54,331 [MainThread  ] [DEBUG]  Got checkov mappings and guidelines from Bridgecrew BE
2022-04-15 18:52:54,332 [MainThread  ] [DEBUG]  Loading external checks from /home/epodegrid/.local/lib/python3.9/site-packages/checkov/terraform/checks/graph_checks
2022-04-15 18:52:54,332 [MainThread  ] [DEBUG]  Searching through ['__pycache__', 'azure', 'gcp', 'aws'] and ['__init__.py']
2022-04-15 18:52:54,332 [MainThread  ] [DEBUG]  Searching through [] and ['__init__.cpython-39.pyc']
2022-04-15 18:52:54,332 [MainThread  ] [DEBUG]  Searching through [] and ['VAsetPeriodicScansOnSQL.yaml', 'StorageLoggingIsEnabledForBlobService.yaml', 'CognitiveServicesCustomerManagedKey.yaml', 'MSQLenablesCustomerManagedKey.yaml', 'PGSQLenablesCustomerManagedKey.yaml', 'VMHasBackUpMachine.yaml', 'AzureStorageAccountsUseCustomerManagedKeyForEncryption.yaml', 'DataExplorerEncryptionUsesCustomKey.yaml', 'StorageLoggingIsEnabledForTableService.yaml', 'VAconfiguredToSendReportsToAdmins.yaml', 'AzureNetworkInterfacePublicIPAddressId.yaml', 'VAisEnabledInStorageAccount.yaml', 'VAconfiguredToSendReports.yaml', 'AzureAntimalwareIsConfiguredWithAutoUpdatesForVMs.yaml', 'AzureSynapseWorkspacesHaveNoIPFirewallRulesAttached.yaml', 'AzureUnattachedDisksAreEncrypted.yaml', 'StorageCriticalDataEncryptedCMK.yaml', 'AzureActiveDirectoryAdminIsConfigured.yaml', 'SQLServerAuditingEnabled.yaml', 'AccessToPostgreSQLFromAzureServicesIsDisabled.yaml', 'AzureDataFactoriesEncryptedWithCustomerManagedKey.yaml', 'StorageContainerActivityLogsNotPublic.yaml', 'VirtualMachinesUtilizingManagedDisks.yaml', 'SQLServerAuditingRetention90Days.yaml', 'AzureMSSQLServerHasSecurityAlertPolicy.yaml']
2022-04-15 18:52:54,443 [MainThread  ] [DEBUG]  Searching through [] and ['GCPProjectHasNoLegacyNetworks.yaml', 'GCPKMSKeyRingsAreNotPubliclyAccessible.yaml', 'DisableAccessToSqlDBInstanceForRootUsersWithoutPassword.yaml', 'GCPAuditLogsConfiguredForAllServicesAndUsers.yaml', 'GCPContainerRegistryReposAreNotPubliclyAccessible.yaml', 'GCPKMSCryptoKeysAreNotPubliclyAccessible.yaml', 'GCPLogBucketsConfiguredUsingLock.yaml', 'GKEClustersAreNotUsingDefaultServiceAccount.yaml', 'ServiceAccountHasGCPmanagedKey.yaml']
2022-04-15 18:52:54,494 [MainThread  ] [DEBUG]  Searching through [] and ['AWSNATGatewaysshouldbeutilized.yaml', 'APIProtectedByWAF.yaml', 'PostgresRDSHasQueryLoggingEnabled.yaml', 'S3BucketVersioning.yaml', 'PostgresDBHasQueryLoggingEnabled.yaml', 'EIPAllocatedToVPCAttachedEC2.yaml', 'S3PublicACLRead.yaml', 'CloudtrailHasCloudwatch.yaml', 'SubnetHasACL.yaml', 'ALBProtectedByWAF.yaml', 'S3BucketEncryption.yaml', 'S3PublicACLWrite.yaml', 'AWSSSMParameterShouldBeEncrypted.yaml', 'CloudFrontHasSecurityHeadersPolicy.yaml', 'WAF2HasLogs.yaml', 'RDSClusterHasBackupPlan.yaml', 'VPCHasFlowLog.yaml', 'AppSyncProtectedByWAF.yaml', 'GuardDutyIsEnabled.yaml', 'IAMUserHasNoConsoleAccess.yaml', 'EFSAddedBackup.yaml', 'SGAttachedToResource.yaml', 'AutoScalingEnableOnDynamoDBTables.yaml', 'IAMGroupHasAtLeastOneUser.yaml', 'IAMUsersAreMembersAtLeastOneGroup.yaml', 'S3BucketHasPublicAccessBlock.yaml', 'S3BucketLogging.yaml', 'APIGWLoggingLevelsDefinedProperly.yaml', 'Route53ARecordAttachedResource.yaml', 'S3KMSEncryptedByDefault.yaml', 'EBSAddedBackup.yaml', 'AutoScallingEnabledELB.yaml', 'AMRClustersNotOpenToInternet.yaml', 'VPCHasRestrictedSG.yaml', 'EncryptedEBSVolumeOnlyConnectedToEC2s.yaml', 'ALBRedirectsHTTPToHTTPS.yaml', 'HTTPNotSendingPasswords.yaml']
2022-04-15 18:52:54,694 [MainThread  ] [DEBUG]  Loading external checks from /home/epodegrid/.local/lib/python3.9/site-packages/checkov/cloudformation/checks/graph_checks
2022-04-15 18:52:54,694 [MainThread  ] [DEBUG]  Searching through ['__pycache__'] and ['__init__.py']
2022-04-15 18:52:54,694 [MainThread  ] [DEBUG]  Searching through [] and ['__init__.cpython-39.pyc']
2022-04-15 18:52:54,694 [MainThread  ] [DEBUG]  Loading external checks from /home/epodegrid/.local/lib/python3.9/site-packages/checkov/kubernetes/checks/graph_checks
2022-04-15 18:52:54,694 [MainThread  ] [DEBUG]  Searching through ['__pycache__'] and ['__init__.py']
2022-04-15 18:52:54,694 [MainThread  ] [DEBUG]  Searching through [] and ['__init__.cpython-39.pyc']
2022-04-15 18:52:54,694 [MainThread  ] [DEBUG]  Loading external checks from /home/epodegrid/.local/lib/python3.9/site-packages/checkov/bicep/checks/graph_checks
2022-04-15 18:52:54,694 [MainThread  ] [DEBUG]  Loading external checks from /home/epodegrid/.local/lib/python3.9/site-packages/checkov/terraform_plan/checks/graph_checks
2022-04-15 18:52:54,701 [MainThread  ] [ERROR]  Template file not found: render.yaml
2022-04-15 18:52:54,702 [MainThread  ] [INFO ]  creating kubernetes graph
2022-04-15 18:52:54,703 [MainThread  ] [DEBUG]  Loading external checks from /home/epodegrid/.local/lib/python3.9/site-packages/checkov/kubernetes/checks/graph_checks
2022-04-15 18:52:54,703 [MainThread  ] [DEBUG]  Searching through ['__pycache__'] and ['__init__.py']
2022-04-15 18:52:54,703 [MainThread  ] [DEBUG]  Searching through [] and ['__init__.cpython-39.pyc']

       _               _              
   ___| |__   ___  ___| | _______   __
  / __| '_ \ / _ \/ __| |/ / _ \ \ / /
 | (__| | | |  __/ (__|   < (_) \ V / 
  \___|_| |_|\___|\___|_|\_\___/ \_/  
                                      
By bridgecrew.io | version: 2.0.1065 

2022-04-15 18:52:54,704 [MainThread  ] [DEBUG]  Getting exit code for report kubernetes
2022-04-15 18:52:54,704 [MainThread  ] [DEBUG]  In get_exit_code; soft_fail: False, soft_fail_on: None, hard_fail_on: None
2022-04-15 18:52:54,704 [MainThread  ] [DEBUG]  No failed checks, or soft_fail is True and soft_fail_on and hard_fail_on are empty - returning 0
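
For what it's worth, the shell exit status agrees with the log above: checkov returns 1 when there are failed checks and 0 otherwise, so an empty scan also exits 0 (quick check below; $? is the standard shell status variable):

checkov -f render.yaml --framework kubernetes
echo $?   # prints 0 here, since no checks were run at all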

— epodegrid, Apr 15 '22 16:04

It seems Checkov did not find any files at the path specified, @epodegrid

— nimrodkor, Apr 17 '22 05:04

I looked into my file paths, and everything seems alright. I have two files in the folder, rbac.yaml and render.yaml. Checkov works fine with rbac.yaml, but fails when passed render.yaml. Here are some more log reports for your reference.

Directory listing

-rw-rw-r--  1 epodegrid epodegrid  4519 Apr 15 19:52 rbac.yaml
-rw-rw-r--  1 epodegrid epodegrid 32543 Apr 16 11:51 render.yaml
-rw-rw-r--  1 epodegrid epodegrid     0 Apr 17 10:23 report.txt
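
Just to rule out an encoding problem or an odd file header on my side (a guess on my part, not a confirmed cause), the two files can also be compared directly:

head -n 5 render.yaml          # look for a leading '---' or helm's '# Source:' comments
file rbac.yaml render.yaml     # rule out a BOM or unexpected encoding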

Log for checkov with rbac.yaml

2022-04-17 10:27:43,342 [MainThread  ] [DEBUG]  Leveraging the bundled IAM Definition.
2022-04-17 10:27:43,342 [MainThread  ] [DEBUG]  Leveraging the IAM definition at /home/epodegrid/.local/lib/python3.9/site-packages/policy_sentry/shared/data/iam-definition.json
2022-04-17 10:27:43,457 [MainThread  ] [DEBUG]  Loading external checks from /home/epodegrid/.local/lib/python3.9/site-packages/checkov/bicep/checks/graph_checks
2022-04-17 10:27:43,537 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:27:43,610 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:27:43,632 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:27:43,632 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:27:43,660 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:27:43,666 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:27:43,698 [MainThread  ] [DEBUG]  Popen(['git', 'version'], cwd=/home/epodegrid/Desktop/glitchy/example, universal_newlines=False, shell=None, istream=None)
2022-04-17 10:27:43,706 [MainThread  ] [DEBUG]  Popen(['git', 'version'], cwd=/home/epodegrid/Desktop/glitchy/example, universal_newlines=False, shell=None, istream=None)
2022-04-17 10:27:43,804 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:27:43,804 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:27:43,804 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:27:43,804 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:27:43,804 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:27:43,822 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:27:43,903 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:27:43,997 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:27:43,998 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:27:43,999 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:27:44,001 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:27:44,002 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:27:44,002 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:27:44,002 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:27:44,004 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:27:44,005 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:27:44,007 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:27:44,011 [MainThread  ] [DEBUG]  No API key present; setting include_all_checkov_policies to True
2022-04-17 10:27:44,011 [MainThread  ] [DEBUG]  Checkov version: 2.0.1066
2022-04-17 10:27:44,011 [MainThread  ] [DEBUG]  Python executable: /usr/bin/python3
2022-04-17 10:27:44,011 [MainThread  ] [DEBUG]  Python version: 3.9.7 (default, Sep 10 2021, 14:59:43) 
[GCC 11.2.0]
2022-04-17 10:27:44,011 [MainThread  ] [DEBUG]  Checkov executable (argv[0]): /home/epodegrid/.local/bin/checkov
2022-04-17 10:27:44,011 [MainThread  ] [DEBUG]  Command Line Args:   -f rbac.yaml --framework kubernetes
Defaults:
  --branch:          master
  --download-external-modules:False
  --external-modules-download-path:.external_modules
  --evaluate-variables:True

2022-04-17 10:27:44,012 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): kubernetes
2022-04-17 10:27:44,012 [MainThread  ] [DEBUG]  kubernetes_runner declares no system dependency checks required.
2022-04-17 10:27:44,012 [MainThread  ] [DEBUG]  No API key found. Scanning locally only.
2022-04-17 10:27:44,609 [MainThread  ] [DEBUG]  Got checkov mappings and guidelines from Bridgecrew BE
2022-04-17 10:27:44,610 [MainThread  ] [DEBUG]  Loading external checks from /home/epodegrid/.local/lib/python3.9/site-packages/checkov/terraform/checks/graph_checks
2022-04-17 10:27:44,610 [MainThread  ] [DEBUG]  Searching through ['__pycache__', 'azure', 'gcp', 'aws'] and ['__init__.py']
2022-04-17 10:27:44,610 [MainThread  ] [DEBUG]  Searching through [] and ['__init__.cpython-39.pyc']
2022-04-17 10:27:44,610 [MainThread  ] [DEBUG]  Searching through [] and ['VAsetPeriodicScansOnSQL.yaml', 'StorageLoggingIsEnabledForBlobService.yaml', 'CognitiveServicesCustomerManagedKey.yaml', 'MSQLenablesCustomerManagedKey.yaml', 'PGSQLenablesCustomerManagedKey.yaml', 'VMHasBackUpMachine.yaml', 'AzureStorageAccountsUseCustomerManagedKeyForEncryption.yaml', 'DataExplorerEncryptionUsesCustomKey.yaml', 'StorageLoggingIsEnabledForTableService.yaml', 'VAconfiguredToSendReportsToAdmins.yaml', 'AzureNetworkInterfacePublicIPAddressId.yaml', 'VAisEnabledInStorageAccount.yaml', 'VAconfiguredToSendReports.yaml', 'AzureAntimalwareIsConfiguredWithAutoUpdatesForVMs.yaml', 'AzureSynapseWorkspacesHaveNoIPFirewallRulesAttached.yaml', 'AzureUnattachedDisksAreEncrypted.yaml', 'StorageCriticalDataEncryptedCMK.yaml', 'AzureActiveDirectoryAdminIsConfigured.yaml', 'SQLServerAuditingEnabled.yaml', 'AccessToPostgreSQLFromAzureServicesIsDisabled.yaml', 'AzureDataFactoriesEncryptedWithCustomerManagedKey.yaml', 'StorageContainerActivityLogsNotPublic.yaml', 'VirtualMachinesUtilizingManagedDisks.yaml', 'SQLServerAuditingRetention90Days.yaml', 'AzureMSSQLServerHasSecurityAlertPolicy.yaml']
2022-04-17 10:27:44,703 [MainThread  ] [DEBUG]  Searching through [] and ['GCPProjectHasNoLegacyNetworks.yaml', 'GCPKMSKeyRingsAreNotPubliclyAccessible.yaml', 'DisableAccessToSqlDBInstanceForRootUsersWithoutPassword.yaml', 'GCPAuditLogsConfiguredForAllServicesAndUsers.yaml', 'GCPContainerRegistryReposAreNotPubliclyAccessible.yaml', 'GCPKMSCryptoKeysAreNotPubliclyAccessible.yaml', 'GCPLogBucketsConfiguredUsingLock.yaml', 'GKEClustersAreNotUsingDefaultServiceAccount.yaml', 'ServiceAccountHasGCPmanagedKey.yaml']
2022-04-17 10:27:44,748 [MainThread  ] [DEBUG]  Searching through [] and ['AWSNATGatewaysshouldbeutilized.yaml', 'APIProtectedByWAF.yaml', 'PostgresRDSHasQueryLoggingEnabled.yaml', 'S3BucketVersioning.yaml', 'PostgresDBHasQueryLoggingEnabled.yaml', 'EIPAllocatedToVPCAttachedEC2.yaml', 'S3PublicACLRead.yaml', 'CloudtrailHasCloudwatch.yaml', 'SubnetHasACL.yaml', 'ALBProtectedByWAF.yaml', 'S3BucketEncryption.yaml', 'S3PublicACLWrite.yaml', 'AWSSSMParameterShouldBeEncrypted.yaml', 'CloudFrontHasSecurityHeadersPolicy.yaml', 'WAF2HasLogs.yaml', 'RDSClusterHasBackupPlan.yaml', 'VPCHasFlowLog.yaml', 'AppSyncProtectedByWAF.yaml', 'GuardDutyIsEnabled.yaml', 'IAMUserHasNoConsoleAccess.yaml', 'EFSAddedBackup.yaml', 'SGAttachedToResource.yaml', 'AutoScalingEnableOnDynamoDBTables.yaml', 'IAMGroupHasAtLeastOneUser.yaml', 'IAMUsersAreMembersAtLeastOneGroup.yaml', 'S3BucketHasPublicAccessBlock.yaml', 'S3BucketLogging.yaml', 'APIGWLoggingLevelsDefinedProperly.yaml', 'Route53ARecordAttachedResource.yaml', 'S3KMSEncryptedByDefault.yaml', 'EBSAddedBackup.yaml', 'AutoScallingEnabledELB.yaml', 'AMRClustersNotOpenToInternet.yaml', 'VPCHasRestrictedSG.yaml', 'EncryptedEBSVolumeOnlyConnectedToEC2s.yaml', 'ALBRedirectsHTTPToHTTPS.yaml', 'HTTPNotSendingPasswords.yaml']
2022-04-17 10:27:44,932 [MainThread  ] [DEBUG]  Loading external checks from /home/epodegrid/.local/lib/python3.9/site-packages/checkov/cloudformation/checks/graph_checks
2022-04-17 10:27:44,932 [MainThread  ] [DEBUG]  Searching through ['__pycache__'] and ['__init__.py']
2022-04-17 10:27:44,932 [MainThread  ] [DEBUG]  Searching through [] and ['__init__.cpython-39.pyc']
2022-04-17 10:27:44,932 [MainThread  ] [DEBUG]  Loading external checks from /home/epodegrid/.local/lib/python3.9/site-packages/checkov/kubernetes/checks/graph_checks
2022-04-17 10:27:44,932 [MainThread  ] [DEBUG]  Searching through ['__pycache__'] and ['__init__.py']
2022-04-17 10:27:44,932 [MainThread  ] [DEBUG]  Searching through [] and ['__init__.cpython-39.pyc']
2022-04-17 10:27:44,932 [MainThread  ] [DEBUG]  Loading external checks from /home/epodegrid/.local/lib/python3.9/site-packages/checkov/bicep/checks/graph_checks
2022-04-17 10:27:44,932 [MainThread  ] [DEBUG]  Loading external checks from /home/epodegrid/.local/lib/python3.9/site-packages/checkov/terraform_plan/checks/graph_checks
2022-04-17 10:27:44,967 [MainThread  ] [INFO ]  creating kubernetes graph
2022-04-17 10:27:44,974 [MainThread  ] [DEBUG]  Running check: The default namespace should not be used on file rbac.yaml
2022-04-17 10:27:44,975 [MainThread  ] [DEBUG]  File rbac.yaml, k8  "ServiceAccount.ServiceAccount" check "The default namespace should not be used" Result: {'result': <CheckResult.FAILED: 'FAILED'>, 'evaluated_keys': []} 
2022-04-17 10:27:44,975 [MainThread  ] [DEBUG]  Running check: Ensure that default service accounts are not actively used on file rbac.yaml
2022-04-17 10:27:44,975 [MainThread  ] [DEBUG]  File rbac.yaml, k8  "ServiceAccount.ServiceAccount" check "Ensure that default service accounts are not actively used" Result: {'result': <CheckResult.PASSED: 'PASSED'>, 'evaluated_keys': []} 
2022-04-17 10:27:44,975 [MainThread  ] [DEBUG]  Running check: Minimize Roles and ClusterRoles that grant permissions to bind RoleBindings or ClusterRoleBindings on file rbac.yaml
2022-04-17 10:27:44,975 [MainThread  ] [DEBUG]  File rbac.yaml, k8  "ClusterRole.ClusterRole" check "Minimize Roles and ClusterRoles that grant permissions to bind RoleBindings or ClusterRoleBindings" Result: {'result': <CheckResult.PASSED: 'PASSED'>, 'evaluated_keys': []} 
2022-04-17 10:27:44,975 [MainThread  ] [DEBUG]  Running check: Minimize wildcard use in Roles and ClusterRoles on file rbac.yaml
2022-04-17 10:27:44,975 [MainThread  ] [DEBUG]  File rbac.yaml, k8  "ClusterRole.ClusterRole" check "Minimize wildcard use in Roles and ClusterRoles" Result: {'result': <CheckResult.PASSED: 'PASSED'>, 'evaluated_keys': []} 
2022-04-17 10:27:44,975 [MainThread  ] [DEBUG]  Running check: Minimize ClusterRoles that grant permissions to approve CertificateSigningRequests on file rbac.yaml
2022-04-17 10:27:44,975 [MainThread  ] [DEBUG]  File rbac.yaml, k8  "ClusterRole.ClusterRole" check "Minimize ClusterRoles that grant permissions to approve CertificateSigningRequests" Result: {'result': <CheckResult.PASSED: 'PASSED'>, 'evaluated_keys': []} 
2022-04-17 10:27:44,975 [MainThread  ] [DEBUG]  Running check: Minimize ClusterRoles that grant control over validating or mutating admission webhook configurations on file rbac.yaml
2022-04-17 10:27:44,976 [MainThread  ] [DEBUG]  File rbac.yaml, k8  "ClusterRole.ClusterRole" check "Minimize ClusterRoles that grant control over validating or mutating admission webhook configurations" Result: {'result': <CheckResult.PASSED: 'PASSED'>, 'evaluated_keys': []} 
2022-04-17 10:27:44,976 [MainThread  ] [DEBUG]  Running check: Minimize Roles and ClusterRoles that grant permissions to escalate Roles or ClusterRoles on file rbac.yaml
2022-04-17 10:27:44,976 [MainThread  ] [DEBUG]  File rbac.yaml, k8  "ClusterRole.ClusterRole" check "Minimize Roles and ClusterRoles that grant permissions to escalate Roles or ClusterRoles" Result: {'result': <CheckResult.PASSED: 'PASSED'>, 'evaluated_keys': []} 
2022-04-17 10:27:44,976 [MainThread  ] [DEBUG]  Running check: Ensure that default service accounts are not actively used on file rbac.yaml
2022-04-17 10:27:44,976 [MainThread  ] [DEBUG]  File rbac.yaml, k8  "ClusterRoleBinding.ClusterRoleBinding" check "Ensure that default service accounts are not actively used" Result: {'result': <CheckResult.PASSED: 'PASSED'>, 'evaluated_keys': []} 
2022-04-17 10:27:44,976 [MainThread  ] [DEBUG]  Running check: The default namespace should not be used on file rbac.yaml
2022-04-17 10:27:44,976 [MainThread  ] [DEBUG]  File rbac.yaml, k8  "Role.Role" check "The default namespace should not be used" Result: {'result': <CheckResult.FAILED: 'FAILED'>, 'evaluated_keys': []} 
2022-04-17 10:27:44,976 [MainThread  ] [DEBUG]  Running check: Minimize Roles and ClusterRoles that grant permissions to bind RoleBindings or ClusterRoleBindings on file rbac.yaml
2022-04-17 10:27:44,976 [MainThread  ] [DEBUG]  File rbac.yaml, k8  "Role.Role" check "Minimize Roles and ClusterRoles that grant permissions to bind RoleBindings or ClusterRoleBindings" Result: {'result': <CheckResult.PASSED: 'PASSED'>, 'evaluated_keys': []} 
2022-04-17 10:27:44,976 [MainThread  ] [DEBUG]  Running check: Minimize wildcard use in Roles and ClusterRoles on file rbac.yaml
2022-04-17 10:27:44,976 [MainThread  ] [DEBUG]  File rbac.yaml, k8  "Role.Role" check "Minimize wildcard use in Roles and ClusterRoles" Result: {'result': <CheckResult.PASSED: 'PASSED'>, 'evaluated_keys': []} 
2022-04-17 10:27:44,976 [MainThread  ] [DEBUG]  Running check: Minimize Roles and ClusterRoles that grant permissions to escalate Roles or ClusterRoles on file rbac.yaml
2022-04-17 10:27:44,977 [MainThread  ] [DEBUG]  File rbac.yaml, k8  "Role.Role" check "Minimize Roles and ClusterRoles that grant permissions to escalate Roles or ClusterRoles" Result: {'result': <CheckResult.PASSED: 'PASSED'>, 'evaluated_keys': []} 
2022-04-17 10:27:44,977 [MainThread  ] [DEBUG]  Running check: The default namespace should not be used on file rbac.yaml
2022-04-17 10:27:44,977 [MainThread  ] [DEBUG]  File rbac.yaml, k8  "Role.Role" check "The default namespace should not be used" Result: {'result': <CheckResult.PASSED: 'PASSED'>, 'evaluated_keys': []} 
2022-04-17 10:27:44,977 [MainThread  ] [DEBUG]  Running check: Minimize Roles and ClusterRoles that grant permissions to bind RoleBindings or ClusterRoleBindings on file rbac.yaml
2022-04-17 10:27:44,977 [MainThread  ] [DEBUG]  File rbac.yaml, k8  "Role.Role" check "Minimize Roles and ClusterRoles that grant permissions to bind RoleBindings or ClusterRoleBindings" Result: {'result': <CheckResult.PASSED: 'PASSED'>, 'evaluated_keys': []} 
2022-04-17 10:27:44,977 [MainThread  ] [DEBUG]  Running check: Minimize wildcard use in Roles and ClusterRoles on file rbac.yaml
2022-04-17 10:27:44,977 [MainThread  ] [DEBUG]  File rbac.yaml, k8  "Role.Role" check "Minimize wildcard use in Roles and ClusterRoles" Result: {'result': <CheckResult.PASSED: 'PASSED'>, 'evaluated_keys': []} 
2022-04-17 10:27:44,977 [MainThread  ] [DEBUG]  Running check: Minimize Roles and ClusterRoles that grant permissions to escalate Roles or ClusterRoles on file rbac.yaml
2022-04-17 10:27:44,977 [MainThread  ] [DEBUG]  File rbac.yaml, k8  "Role.Role" check "Minimize Roles and ClusterRoles that grant permissions to escalate Roles or ClusterRoles" Result: {'result': <CheckResult.PASSED: 'PASSED'>, 'evaluated_keys': []} 
2022-04-17 10:27:44,977 [MainThread  ] [DEBUG]  Running check: The default namespace should not be used on file rbac.yaml
2022-04-17 10:27:44,977 [MainThread  ] [DEBUG]  File rbac.yaml, k8  "Role.Role" check "The default namespace should not be used" Result: {'result': <CheckResult.FAILED: 'FAILED'>, 'evaluated_keys': []} 
2022-04-17 10:27:44,977 [MainThread  ] [DEBUG]  Running check: Minimize Roles and ClusterRoles that grant permissions to bind RoleBindings or ClusterRoleBindings on file rbac.yaml
2022-04-17 10:27:44,977 [MainThread  ] [DEBUG]  File rbac.yaml, k8  "Role.Role" check "Minimize Roles and ClusterRoles that grant permissions to bind RoleBindings or ClusterRoleBindings" Result: {'result': <CheckResult.PASSED: 'PASSED'>, 'evaluated_keys': []} 
2022-04-17 10:27:44,977 [MainThread  ] [DEBUG]  Running check: Minimize wildcard use in Roles and ClusterRoles on file rbac.yaml
2022-04-17 10:27:44,978 [MainThread  ] [DEBUG]  File rbac.yaml, k8  "Role.Role" check "Minimize wildcard use in Roles and ClusterRoles" Result: {'result': <CheckResult.PASSED: 'PASSED'>, 'evaluated_keys': []} 
2022-04-17 10:27:44,978 [MainThread  ] [DEBUG]  Running check: Minimize Roles and ClusterRoles that grant permissions to escalate Roles or ClusterRoles on file rbac.yaml
2022-04-17 10:27:44,978 [MainThread  ] [DEBUG]  File rbac.yaml, k8  "Role.Role" check "Minimize Roles and ClusterRoles that grant permissions to escalate Roles or ClusterRoles" Result: {'result': <CheckResult.PASSED: 'PASSED'>, 'evaluated_keys': []} 
2022-04-17 10:27:44,978 [MainThread  ] [DEBUG]  Running check: The default namespace should not be used on file rbac.yaml
2022-04-17 10:27:44,978 [MainThread  ] [DEBUG]  File rbac.yaml, k8  "RoleBinding.RoleBinding" check "The default namespace should not be used" Result: {'result': <CheckResult.FAILED: 'FAILED'>, 'evaluated_keys': []} 
2022-04-17 10:27:44,978 [MainThread  ] [DEBUG]  Running check: Ensure that default service accounts are not actively used on file rbac.yaml
2022-04-17 10:27:44,978 [MainThread  ] [DEBUG]  File rbac.yaml, k8  "RoleBinding.RoleBinding" check "Ensure that default service accounts are not actively used" Result: {'result': <CheckResult.PASSED: 'PASSED'>, 'evaluated_keys': []} 
2022-04-17 10:27:44,978 [MainThread  ] [DEBUG]  Running check: The default namespace should not be used on file rbac.yaml
2022-04-17 10:27:44,978 [MainThread  ] [DEBUG]  File rbac.yaml, k8  "RoleBinding.RoleBinding" check "The default namespace should not be used" Result: {'result': <CheckResult.PASSED: 'PASSED'>, 'evaluated_keys': []} 
2022-04-17 10:27:44,978 [MainThread  ] [DEBUG]  Running check: Ensure that default service accounts are not actively used on file rbac.yaml
2022-04-17 10:27:44,978 [MainThread  ] [DEBUG]  File rbac.yaml, k8  "RoleBinding.RoleBinding" check "Ensure that default service accounts are not actively used" Result: {'result': <CheckResult.PASSED: 'PASSED'>, 'evaluated_keys': []} 
2022-04-17 10:27:44,978 [MainThread  ] [DEBUG]  Running check: The default namespace should not be used on file rbac.yaml
2022-04-17 10:27:44,978 [MainThread  ] [DEBUG]  File rbac.yaml, k8  "RoleBinding.RoleBinding" check "The default namespace should not be used" Result: {'result': <CheckResult.FAILED: 'FAILED'>, 'evaluated_keys': []} 
2022-04-17 10:27:44,978 [MainThread  ] [DEBUG]  Running check: Ensure that default service accounts are not actively used on file rbac.yaml
2022-04-17 10:27:44,979 [MainThread  ] [DEBUG]  File rbac.yaml, k8  "RoleBinding.RoleBinding" check "Ensure that default service accounts are not actively used" Result: {'result': <CheckResult.PASSED: 'PASSED'>, 'evaluated_keys': []} 
2022-04-17 10:27:44,979 [MainThread  ] [DEBUG]  Loading external checks from /home/epodegrid/.local/lib/python3.9/site-packages/checkov/kubernetes/checks/graph_checks
2022-04-17 10:27:44,979 [MainThread  ] [DEBUG]  Searching through ['__pycache__'] and ['__init__.py']
2022-04-17 10:27:44,979 [MainThread  ] [DEBUG]  Searching through [] and ['__init__.cpython-39.pyc']

       _               _              
   ___| |__   ___  ___| | _______   __
  / __| '_ \ / _ \/ __| |/ / _ \ \ / /
 | (__| | | |  __/ (__|   < (_) \ V / 
  \___|_| |_|\___|\___|_|\_\___/ \_/  
                                      
By bridgecrew.io | version: 2.0.1066 

2022-04-17 10:27:44,979 [MainThread  ] [DEBUG]  Getting exit code for report kubernetes
2022-04-17 10:27:44,979 [MainThread  ] [DEBUG]  In get_exit_code; soft_fail: False, soft_fail_on: None, hard_fail_on: None
2022-04-17 10:27:44,979 [MainThread  ] [DEBUG]  There are failed checks and all soft/hard fail args are empty - returning 1
kubernetes scan results:

Passed checks: 21, Failed checks: 5, Skipped checks: 0

Check: CKV_K8S_41: "Ensure that default service accounts are not actively used"
	PASSED for resource: ServiceAccount.default.monitoring-k8s
	File: /rbac.yaml:3-16
	Guide: https://docs.bridgecrew.io/docs/bc_k8s_38
Check: CKV_K8S_157: "Minimize Roles and ClusterRoles that grant permissions to bind RoleBindings or ClusterRoleBindings"
	PASSED for resource: ClusterRole.default.monitoring-k8s
	File: /rbac.yaml:18-48
Check: CKV_K8S_49: "Minimize wildcard use in Roles and ClusterRoles"
	PASSED for resource: ClusterRole.default.monitoring-k8s
	File: /rbac.yaml:18-48
	Guide: https://docs.bridgecrew.io/docs/ensure-minimized-wildcard-use-in-roles-and-clusterroles
Check: CKV_K8S_156: "Minimize ClusterRoles that grant permissions to approve CertificateSigningRequests"
	PASSED for resource: ClusterRole.default.monitoring-k8s
	File: /rbac.yaml:18-48
Check: CKV_K8S_155: "Minimize ClusterRoles that grant control over validating or mutating admission webhook configurations"
	PASSED for resource: ClusterRole.default.monitoring-k8s
	File: /rbac.yaml:18-48
Check: CKV_K8S_158: "Minimize Roles and ClusterRoles that grant permissions to escalate Roles or ClusterRoles"
	PASSED for resource: ClusterRole.default.monitoring-k8s
	File: /rbac.yaml:18-48
Check: CKV_K8S_42: "Ensure that default service accounts are not actively used"
	PASSED for resource: ClusterRoleBinding.default.monitoring-k8s
	File: /rbac.yaml:50-70
	Guide: https://docs.bridgecrew.io/docs/bc_k8s_38
Check: CKV_K8S_157: "Minimize Roles and ClusterRoles that grant permissions to bind RoleBindings or ClusterRoleBindings"
	PASSED for resource: Role.default.monitoring-k8s
	File: /rbac.yaml:72-97
Check: CKV_K8S_49: "Minimize wildcard use in Roles and ClusterRoles"
	PASSED for resource: Role.default.monitoring-k8s
	File: /rbac.yaml:72-97
	Guide: https://docs.bridgecrew.io/docs/ensure-minimized-wildcard-use-in-roles-and-clusterroles
Check: CKV_K8S_158: "Minimize Roles and ClusterRoles that grant permissions to escalate Roles or ClusterRoles"
	PASSED for resource: Role.default.monitoring-k8s
	File: /rbac.yaml:72-97
Check: CKV_K8S_21: "The default namespace should not be used"
	PASSED for resource: Role.kube-system.monitoring-kube-system-k8s
	File: /rbac.yaml:99-119
	Guide: https://docs.bridgecrew.io/docs/bc_k8s_20
Check: CKV_K8S_157: "Minimize Roles and ClusterRoles that grant permissions to bind RoleBindings or ClusterRoleBindings"
	PASSED for resource: Role.kube-system.monitoring-kube-system-k8s
	File: /rbac.yaml:99-119
Check: CKV_K8S_49: "Minimize wildcard use in Roles and ClusterRoles"
	PASSED for resource: Role.kube-system.monitoring-kube-system-k8s
	File: /rbac.yaml:99-119
	Guide: https://docs.bridgecrew.io/docs/ensure-minimized-wildcard-use-in-roles-and-clusterroles
Check: CKV_K8S_158: "Minimize Roles and ClusterRoles that grant permissions to escalate Roles or ClusterRoles"
	PASSED for resource: Role.kube-system.monitoring-kube-system-k8s
	File: /rbac.yaml:99-119
Check: CKV_K8S_157: "Minimize Roles and ClusterRoles that grant permissions to bind RoleBindings or ClusterRoleBindings"
	PASSED for resource: Role.default.monitoring-default-k8s
	File: /rbac.yaml:121-141
Check: CKV_K8S_49: "Minimize wildcard use in Roles and ClusterRoles"
	PASSED for resource: Role.default.monitoring-default-k8s
	File: /rbac.yaml:121-141
	Guide: https://docs.bridgecrew.io/docs/ensure-minimized-wildcard-use-in-roles-and-clusterroles
Check: CKV_K8S_158: "Minimize Roles and ClusterRoles that grant permissions to escalate Roles or ClusterRoles"
	PASSED for resource: Role.default.monitoring-default-k8s
	File: /rbac.yaml:121-141
Check: CKV_K8S_42: "Ensure that default service accounts are not actively used"
	PASSED for resource: RoleBinding.default.monitoring-k8s
	File: /rbac.yaml:143-164
	Guide: https://docs.bridgecrew.io/docs/bc_k8s_38
Check: CKV_K8S_21: "The default namespace should not be used"
	PASSED for resource: RoleBinding.kube-system.monitoring-kube-system-k8s
	File: /rbac.yaml:166-187
	Guide: https://docs.bridgecrew.io/docs/bc_k8s_20
Check: CKV_K8S_42: "Ensure that default service accounts are not actively used"
	PASSED for resource: RoleBinding.kube-system.monitoring-kube-system-k8s
	File: /rbac.yaml:166-187
	Guide: https://docs.bridgecrew.io/docs/bc_k8s_38
Check: CKV_K8S_42: "Ensure that default service accounts are not actively used"
	PASSED for resource: RoleBinding.default.monitoring-default-k8s
	File: /rbac.yaml:189-209
	Guide: https://docs.bridgecrew.io/docs/bc_k8s_38
Check: CKV_K8S_21: "The default namespace should not be used"
	FAILED for resource: ServiceAccount.default.monitoring-k8s
	File: /rbac.yaml:3-16
	Guide: https://docs.bridgecrew.io/docs/bc_k8s_20

		3  | apiVersion: v1
		4  | kind: ServiceAccount
		5  | metadata:
		6  |   labels:
		7  |     app: prometheus
		8  |     group: com.stakater.platform
		9  |     provider: stakater
		10 |     version: "2.2.0-rc.0"
		11 |     chart: "prometheus-1.0.32"
		12 |     release: "release-name"
		13 |     heritage: "Helm"
		14 |   name: monitoring-k8s
		15 |   namespace: default
		16 | ---

Check: CKV_K8S_21: "The default namespace should not be used"
	FAILED for resource: Role.default.monitoring-k8s
	File: /rbac.yaml:72-97
	Guide: https://docs.bridgecrew.io/docs/bc_k8s_20

		72 | apiVersion: rbac.authorization.k8s.io/v1beta1
		73 | kind: Role
		74 | metadata:
		75 |   labels:
		76 |     app: prometheus
		77 |     group: com.stakater.platform
		78 |     provider: stakater
		79 |     version: "2.2.0-rc.0"
		80 |     chart: "prometheus-1.0.32"
		81 |     release: "release-name"
		82 |     heritage: "Helm"
		83 |   name: monitoring-k8s
		84 |   namespace: default
		85 | rules:
		86 | - apiGroups: [""]
		87 |   resources:
		88 |   - nodes
		89 |   - services
		90 |   - endpoints
		91 |   - pods
		92 |   verbs: ["get", "list", "watch"]
		93 | - apiGroups: [""]
		94 |   resources:
		95 |   - configmaps
		96 |   verbs: ["get"]
		97 | ---

Check: CKV_K8S_21: "The default namespace should not be used"
	FAILED for resource: Role.default.monitoring-default-k8s
	File: /rbac.yaml:121-141
	Guide: https://docs.bridgecrew.io/docs/bc_k8s_20

		121 | apiVersion: rbac.authorization.k8s.io/v1beta1
		122 | kind: Role
		123 | metadata:
		124 |   labels:
		125 |     app: prometheus
		126 |     group: com.stakater.platform
		127 |     provider: stakater
		128 |     version: "2.2.0-rc.0"
		129 |     chart: "prometheus-1.0.32"
		130 |     release: "release-name"
		131 |     heritage: "Helm"
		132 |   name: monitoring-default-k8s
		133 |   namespace: default
		134 | rules:
		135 | - apiGroups: [""]
		136 |   resources:
		137 |   - services
		138 |   - endpoints
		139 |   - pods
		140 |   verbs: ["get", "list", "watch"]
		141 | ---

Check: CKV_K8S_21: "The default namespace should not be used"
	FAILED for resource: RoleBinding.default.monitoring-k8s
	File: /rbac.yaml:143-164
	Guide: https://docs.bridgecrew.io/docs/bc_k8s_20

		143 | apiVersion: rbac.authorization.k8s.io/v1beta1
		144 | kind: RoleBinding
		145 | metadata:
		146 |   labels:
		147 |     app: prometheus
		148 |     group: com.stakater.platform
		149 |     provider: stakater
		150 |     version: "2.2.0-rc.0"
		151 |     chart: "prometheus-1.0.32"
		152 |     release: "release-name"
		153 |     heritage: "Helm"
		154 |   name: monitoring-k8s
		155 |   namespace: default
		156 | roleRef:
		157 |   apiGroup: rbac.authorization.k8s.io
		158 |   kind: Role
		159 |   name: monitoring-k8s
		160 | subjects:
		161 | - kind: ServiceAccount
		162 |   name: monitoring-k8s
		163 |   namespace: default
		164 | ---

Check: CKV_K8S_21: "The default namespace should not be used"
	FAILED for resource: RoleBinding.default.monitoring-default-k8s
	File: /rbac.yaml:189-209
	Guide: https://docs.bridgecrew.io/docs/bc_k8s_20

		189 | apiVersion: rbac.authorization.k8s.io/v1beta1
		190 | kind: RoleBinding
		191 | metadata:
		192 |   labels:
		193 |     app: prometheus
		194 |     group: com.stakater.platform
		195 |     provider: stakater
		196 |     version: "2.2.0-rc.0"
		197 |     chart: "prometheus-1.0.32"
		198 |     release: "release-name"
		199 |     heritage: "Helm"
		200 |   name: monitoring-default-k8s
		201 |   namespace: default
		202 | roleRef:
		203 |   apiGroup: rbac.authorization.k8s.io
		204 |   kind: Role
		205 |   name: monitoring-default-k8s
		206 | subjects:
		207 | - kind: ServiceAccount
		208 |   name: monitoring-k8s
		209 |   namespace: default

Log for checkov with render.yaml

2022-04-17 10:29:04,364 [MainThread  ] [DEBUG]  Leveraging the bundled IAM Definition.
2022-04-17 10:29:04,364 [MainThread  ] [DEBUG]  Leveraging the IAM definition at /home/epodegrid/.local/lib/python3.9/site-packages/policy_sentry/shared/data/iam-definition.json
2022-04-17 10:29:04,472 [MainThread  ] [DEBUG]  Loading external checks from /home/epodegrid/.local/lib/python3.9/site-packages/checkov/bicep/checks/graph_checks
2022-04-17 10:29:04,549 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:29:04,622 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:29:04,644 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:29:04,644 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:29:04,673 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:29:04,680 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:29:04,711 [MainThread  ] [DEBUG]  Popen(['git', 'version'], cwd=/home/epodegrid/Desktop/glitchy/example, universal_newlines=False, shell=None, istream=None)
2022-04-17 10:29:04,718 [MainThread  ] [DEBUG]  Popen(['git', 'version'], cwd=/home/epodegrid/Desktop/glitchy/example, universal_newlines=False, shell=None, istream=None)
2022-04-17 10:29:04,809 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:29:04,809 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:29:04,809 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:29:04,810 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:29:04,810 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:29:04,825 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:29:04,906 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:29:05,000 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:29:05,000 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:29:05,001 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:29:05,004 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:29:05,004 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:29:05,004 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:29:05,005 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:29:05,006 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:29:05,007 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:29:05,009 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:29:05,013 [MainThread  ] [DEBUG]  No API key present; setting include_all_checkov_policies to True
2022-04-17 10:29:05,013 [MainThread  ] [DEBUG]  Checkov version: 2.0.1066
2022-04-17 10:29:05,013 [MainThread  ] [DEBUG]  Python executable: /usr/bin/python3
2022-04-17 10:29:05,014 [MainThread  ] [DEBUG]  Python version: 3.9.7 (default, Sep 10 2021, 14:59:43) 
[GCC 11.2.0]
2022-04-17 10:29:05,014 [MainThread  ] [DEBUG]  Checkov executable (argv[0]): /home/epodegrid/.local/bin/checkov
2022-04-17 10:29:05,014 [MainThread  ] [DEBUG]  Command Line Args:   -f render.yaml --framework kubernetes
Defaults:
  --branch:          master
  --download-external-modules:False
  --external-modules-download-path:.external_modules
  --evaluate-variables:True

2022-04-17 10:29:05,014 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): kubernetes
2022-04-17 10:29:05,014 [MainThread  ] [DEBUG]  kubernetes_runner declares no system dependency checks required.
2022-04-17 10:29:05,014 [MainThread  ] [DEBUG]  No API key found. Scanning locally only.
2022-04-17 10:29:05,127 [MainThread  ] [DEBUG]  Got checkov mappings and guidelines from Bridgecrew BE
2022-04-17 10:29:05,127 [MainThread  ] [DEBUG]  Loading external checks from /home/epodegrid/.local/lib/python3.9/site-packages/checkov/terraform/checks/graph_checks
2022-04-17 10:29:05,127 [MainThread  ] [DEBUG]  Searching through ['__pycache__', 'azure', 'gcp', 'aws'] and ['__init__.py']
2022-04-17 10:29:05,127 [MainThread  ] [DEBUG]  Searching through [] and ['__init__.cpython-39.pyc']
2022-04-17 10:29:05,127 [MainThread  ] [DEBUG]  Searching through [] and ['VAsetPeriodicScansOnSQL.yaml', 'StorageLoggingIsEnabledForBlobService.yaml', 'CognitiveServicesCustomerManagedKey.yaml', 'MSQLenablesCustomerManagedKey.yaml', 'PGSQLenablesCustomerManagedKey.yaml', 'VMHasBackUpMachine.yaml', 'AzureStorageAccountsUseCustomerManagedKeyForEncryption.yaml', 'DataExplorerEncryptionUsesCustomKey.yaml', 'StorageLoggingIsEnabledForTableService.yaml', 'VAconfiguredToSendReportsToAdmins.yaml', 'AzureNetworkInterfacePublicIPAddressId.yaml', 'VAisEnabledInStorageAccount.yaml', 'VAconfiguredToSendReports.yaml', 'AzureAntimalwareIsConfiguredWithAutoUpdatesForVMs.yaml', 'AzureSynapseWorkspacesHaveNoIPFirewallRulesAttached.yaml', 'AzureUnattachedDisksAreEncrypted.yaml', 'StorageCriticalDataEncryptedCMK.yaml', 'AzureActiveDirectoryAdminIsConfigured.yaml', 'SQLServerAuditingEnabled.yaml', 'AccessToPostgreSQLFromAzureServicesIsDisabled.yaml', 'AzureDataFactoriesEncryptedWithCustomerManagedKey.yaml', 'StorageContainerActivityLogsNotPublic.yaml', 'VirtualMachinesUtilizingManagedDisks.yaml', 'SQLServerAuditingRetention90Days.yaml', 'AzureMSSQLServerHasSecurityAlertPolicy.yaml']
2022-04-17 10:29:05,216 [MainThread  ] [DEBUG]  Searching through [] and ['GCPProjectHasNoLegacyNetworks.yaml', 'GCPKMSKeyRingsAreNotPubliclyAccessible.yaml', 'DisableAccessToSqlDBInstanceForRootUsersWithoutPassword.yaml', 'GCPAuditLogsConfiguredForAllServicesAndUsers.yaml', 'GCPContainerRegistryReposAreNotPubliclyAccessible.yaml', 'GCPKMSCryptoKeysAreNotPubliclyAccessible.yaml', 'GCPLogBucketsConfiguredUsingLock.yaml', 'GKEClustersAreNotUsingDefaultServiceAccount.yaml', 'ServiceAccountHasGCPmanagedKey.yaml']
2022-04-17 10:29:05,263 [MainThread  ] [DEBUG]  Searching through [] and ['AWSNATGatewaysshouldbeutilized.yaml', 'APIProtectedByWAF.yaml', 'PostgresRDSHasQueryLoggingEnabled.yaml', 'S3BucketVersioning.yaml', 'PostgresDBHasQueryLoggingEnabled.yaml', 'EIPAllocatedToVPCAttachedEC2.yaml', 'S3PublicACLRead.yaml', 'CloudtrailHasCloudwatch.yaml', 'SubnetHasACL.yaml', 'ALBProtectedByWAF.yaml', 'S3BucketEncryption.yaml', 'S3PublicACLWrite.yaml', 'AWSSSMParameterShouldBeEncrypted.yaml', 'CloudFrontHasSecurityHeadersPolicy.yaml', 'WAF2HasLogs.yaml', 'RDSClusterHasBackupPlan.yaml', 'VPCHasFlowLog.yaml', 'AppSyncProtectedByWAF.yaml', 'GuardDutyIsEnabled.yaml', 'IAMUserHasNoConsoleAccess.yaml', 'EFSAddedBackup.yaml', 'SGAttachedToResource.yaml', 'AutoScalingEnableOnDynamoDBTables.yaml', 'IAMGroupHasAtLeastOneUser.yaml', 'IAMUsersAreMembersAtLeastOneGroup.yaml', 'S3BucketHasPublicAccessBlock.yaml', 'S3BucketLogging.yaml', 'APIGWLoggingLevelsDefinedProperly.yaml', 'Route53ARecordAttachedResource.yaml', 'S3KMSEncryptedByDefault.yaml', 'EBSAddedBackup.yaml', 'AutoScallingEnabledELB.yaml', 'AMRClustersNotOpenToInternet.yaml', 'VPCHasRestrictedSG.yaml', 'EncryptedEBSVolumeOnlyConnectedToEC2s.yaml', 'ALBRedirectsHTTPToHTTPS.yaml', 'HTTPNotSendingPasswords.yaml']
2022-04-17 10:29:05,459 [MainThread  ] [DEBUG]  Loading external checks from /home/epodegrid/.local/lib/python3.9/site-packages/checkov/cloudformation/checks/graph_checks
2022-04-17 10:29:05,459 [MainThread  ] [DEBUG]  Searching through ['__pycache__'] and ['__init__.py']
2022-04-17 10:29:05,459 [MainThread  ] [DEBUG]  Searching through [] and ['__init__.cpython-39.pyc']
2022-04-17 10:29:05,459 [MainThread  ] [DEBUG]  Loading external checks from /home/epodegrid/.local/lib/python3.9/site-packages/checkov/kubernetes/checks/graph_checks
2022-04-17 10:29:05,459 [MainThread  ] [DEBUG]  Searching through ['__pycache__'] and ['__init__.py']
2022-04-17 10:29:05,459 [MainThread  ] [DEBUG]  Searching through [] and ['__init__.cpython-39.pyc']
2022-04-17 10:29:05,459 [MainThread  ] [DEBUG]  Loading external checks from /home/epodegrid/.local/lib/python3.9/site-packages/checkov/bicep/checks/graph_checks
2022-04-17 10:29:05,459 [MainThread  ] [DEBUG]  Loading external checks from /home/epodegrid/.local/lib/python3.9/site-packages/checkov/terraform_plan/checks/graph_checks
2022-04-17 10:29:05,467 [MainThread  ] [INFO ]  creating kubernetes graph
2022-04-17 10:29:05,468 [MainThread  ] [DEBUG]  Loading external checks from /home/epodegrid/.local/lib/python3.9/site-packages/checkov/kubernetes/checks/graph_checks
2022-04-17 10:29:05,468 [MainThread  ] [DEBUG]  Searching through ['__pycache__'] and ['__init__.py']
2022-04-17 10:29:05,468 [MainThread  ] [DEBUG]  Searching through [] and ['__init__.cpython-39.pyc']

       _               _              
   ___| |__   ___  ___| | _______   __
  / __| '_ \ / _ \/ __| |/ / _ \ \ / /
 | (__| | | |  __/ (__|   < (_) \ V / 
  \___|_| |_|\___|\___|_|\_\___/ \_/  
                                      
By bridgecrew.io | version: 2.0.1066 

2022-04-17 10:29:05,469 [MainThread  ] [DEBUG]  Getting exit code for report kubernetes
2022-04-17 10:29:05,469 [MainThread  ] [DEBUG]  In get_exit_code; soft_fail: False, soft_fail_on: None, hard_fail_on: None
2022-04-17 10:29:05,469 [MainThread  ] [DEBUG]  No failed checks, or soft_fail is True and soft_fail_on and hard_fail_on are empty - returning 0

Hope this helps in figuring out what might be going wrong.

— epodegrid, Apr 17 '22 08:04

Both files are multi-document YAML files, and I have tried Checkov on them in both Linux and WSL. Another thing to add: when running without --framework kubernetes, Checkov (from what I understand) simply skips the Kubernetes scans on render.yaml. It works fine with rbac.yaml.
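
If it helps, one way to bisect this (a sketch only, assuming GNU csplit is available; I have not confirmed it isolates the problem) is to split render.yaml at the document separators and scan each piece on its own:

# split the multi-document file into doc_00.yaml, doc_01.yaml, ...
csplit -z -b '%02d.yaml' -f doc_ render.yaml '/^---$/' '{*}'
# scan each document separately to find the one that makes checkov go quiet
for f in doc_*.yaml; do
  echo "== $f =="
  checkov -f "$f" --framework kubernetes
done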

Here are the logs for your reference.

2022-04-17 10:34:50,313 [MainThread  ] [DEBUG]  Leveraging the bundled IAM Definition.
2022-04-17 10:34:50,313 [MainThread  ] [DEBUG]  Leveraging the IAM definition at /home/epodegrid/.local/lib/python3.9/site-packages/policy_sentry/shared/data/iam-definition.json
2022-04-17 10:34:50,426 [MainThread  ] [DEBUG]  Loading external checks from /home/epodegrid/.local/lib/python3.9/site-packages/checkov/bicep/checks/graph_checks
2022-04-17 10:34:50,506 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:34:50,580 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:34:50,604 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:34:50,604 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:34:50,633 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:34:50,640 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:34:50,672 [MainThread  ] [DEBUG]  Popen(['git', 'version'], cwd=/home/epodegrid/Desktop/glitchy/example, universal_newlines=False, shell=None, istream=None)
2022-04-17 10:34:50,681 [MainThread  ] [DEBUG]  Popen(['git', 'version'], cwd=/home/epodegrid/Desktop/glitchy/example, universal_newlines=False, shell=None, istream=None)
2022-04-17 10:34:50,777 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:34:50,778 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:34:50,778 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:34:50,778 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:34:50,778 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:34:50,794 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:34:50,875 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:34:50,962 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:34:50,963 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:34:50,964 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:34:50,966 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:34:50,967 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:34:50,967 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:34:50,967 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:34:50,969 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:34:50,969 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:34:50,972 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:34:50,975 [MainThread  ] [DEBUG]  No API key present; setting include_all_checkov_policies to True
2022-04-17 10:34:50,976 [MainThread  ] [DEBUG]  Checkov version: 2.0.1066
2022-04-17 10:34:50,976 [MainThread  ] [DEBUG]  Python executable: /usr/bin/python3
2022-04-17 10:34:50,976 [MainThread  ] [DEBUG]  Python version: 3.9.7 (default, Sep 10 2021, 14:59:43) 
[GCC 11.2.0]
2022-04-17 10:34:50,976 [MainThread  ] [DEBUG]  Checkov executable (argv[0]): /home/epodegrid/.local/bin/checkov
2022-04-17 10:34:50,976 [MainThread  ] [DEBUG]  Command Line Args:   -f render.yaml
Defaults:
  --framework:       ['all']
  --branch:          master
  --download-external-modules:False
  --external-modules-download-path:.external_modules
  --evaluate-variables:True

2022-04-17 10:34:50,976 [MainThread  ] [DEBUG]  Resultant set of frameworks (removing skipped frameworks): all
2022-04-17 10:34:50,976 [MainThread  ] [DEBUG]  terraform_runner declares no system dependency checks required.
2022-04-17 10:34:50,976 [MainThread  ] [DEBUG]  cloudformation_runner declares no system dependency checks required.
2022-04-17 10:34:50,976 [MainThread  ] [DEBUG]  kubernetes_runner declares no system dependency checks required.
2022-04-17 10:34:50,976 [MainThread  ] [DEBUG]  serverless_runner declares no system dependency checks required.
2022-04-17 10:34:50,976 [MainThread  ] [DEBUG]  arm_runner declares no system dependency checks required.
2022-04-17 10:34:50,976 [MainThread  ] [DEBUG]  terraform_plan_runner declares no system dependency checks required.
2022-04-17 10:34:50,976 [MainThread  ] [INFO ]  Checking necessary system dependancies for helm checks.
2022-04-17 10:34:51,034 [MainThread  ] [INFO ]  Found working version of helm dependancies: v3.8.2
2022-04-17 10:34:51,034 [MainThread  ] [DEBUG]  dockerfile_runner declares no system dependency checks required.
2022-04-17 10:34:51,034 [MainThread  ] [DEBUG]  secrets_runner declares no system dependency checks required.
2022-04-17 10:34:51,034 [MainThread  ] [DEBUG]  json_runner declares no system dependency checks required.
2022-04-17 10:34:51,034 [MainThread  ] [DEBUG]  yaml_runner declares no system dependency checks required.
2022-04-17 10:34:51,034 [MainThread  ] [DEBUG]  github_configuration_runner declares no system dependency checks required.
2022-04-17 10:34:51,034 [MainThread  ] [DEBUG]  gitlab_configuration_runner declares no system dependency checks required.
2022-04-17 10:34:51,034 [MainThread  ] [DEBUG]  bitbucket_configuration_runner declares no system dependency checks required.
2022-04-17 10:34:51,034 [MainThread  ] [INFO ]  Checking necessary system dependancies for kustomize checks.
2022-04-17 10:34:51,090 [MainThread  ] [INFO ]  Found working version of kustomize dependancy kubectl: 1.23
2022-04-17 10:34:51,091 [MainThread  ] [DEBUG]  sca_package_runner declares no system dependency checks required.
2022-04-17 10:34:51,091 [MainThread  ] [DEBUG]  github_actions_runner declares no system dependency checks required.
2022-04-17 10:34:51,091 [MainThread  ] [DEBUG]  bicep_runner declares no system dependency checks required.
2022-04-17 10:34:51,091 [MainThread  ] [DEBUG]  openapi_runner declares no system dependency checks required.
2022-04-17 10:34:51,091 [MainThread  ] [DEBUG]  sca_image_runner declares no system dependency checks required.
2022-04-17 10:34:51,091 [MainThread  ] [DEBUG]  No API key found. Scanning locally only.
2022-04-17 10:34:51,193 [MainThread  ] [DEBUG]  Got checkov mappings and guidelines from Bridgecrew BE
2022-04-17 10:34:51,193 [MainThread  ] [DEBUG]  Loading external checks from /home/epodegrid/.local/lib/python3.9/site-packages/checkov/terraform/checks/graph_checks
2022-04-17 10:34:51,194 [MainThread  ] [DEBUG]  Searching through ['__pycache__', 'azure', 'gcp', 'aws'] and ['__init__.py']
2022-04-17 10:34:51,194 [MainThread  ] [DEBUG]  Searching through [] and ['__init__.cpython-39.pyc']
2022-04-17 10:34:51,194 [MainThread  ] [DEBUG]  Searching through [] and ['VAsetPeriodicScansOnSQL.yaml', 'StorageLoggingIsEnabledForBlobService.yaml', 'CognitiveServicesCustomerManagedKey.yaml', 'MSQLenablesCustomerManagedKey.yaml', 'PGSQLenablesCustomerManagedKey.yaml', 'VMHasBackUpMachine.yaml', 'AzureStorageAccountsUseCustomerManagedKeyForEncryption.yaml', 'DataExplorerEncryptionUsesCustomKey.yaml', 'StorageLoggingIsEnabledForTableService.yaml', 'VAconfiguredToSendReportsToAdmins.yaml', 'AzureNetworkInterfacePublicIPAddressId.yaml', 'VAisEnabledInStorageAccount.yaml', 'VAconfiguredToSendReports.yaml', 'AzureAntimalwareIsConfiguredWithAutoUpdatesForVMs.yaml', 'AzureSynapseWorkspacesHaveNoIPFirewallRulesAttached.yaml', 'AzureUnattachedDisksAreEncrypted.yaml', 'StorageCriticalDataEncryptedCMK.yaml', 'AzureActiveDirectoryAdminIsConfigured.yaml', 'SQLServerAuditingEnabled.yaml', 'AccessToPostgreSQLFromAzureServicesIsDisabled.yaml', 'AzureDataFactoriesEncryptedWithCustomerManagedKey.yaml', 'StorageContainerActivityLogsNotPublic.yaml', 'VirtualMachinesUtilizingManagedDisks.yaml', 'SQLServerAuditingRetention90Days.yaml', 'AzureMSSQLServerHasSecurityAlertPolicy.yaml']
2022-04-17 10:34:51,277 [MainThread  ] [DEBUG]  Searching through [] and ['GCPProjectHasNoLegacyNetworks.yaml', 'GCPKMSKeyRingsAreNotPubliclyAccessible.yaml', 'DisableAccessToSqlDBInstanceForRootUsersWithoutPassword.yaml', 'GCPAuditLogsConfiguredForAllServicesAndUsers.yaml', 'GCPContainerRegistryReposAreNotPubliclyAccessible.yaml', 'GCPKMSCryptoKeysAreNotPubliclyAccessible.yaml', 'GCPLogBucketsConfiguredUsingLock.yaml', 'GKEClustersAreNotUsingDefaultServiceAccount.yaml', 'ServiceAccountHasGCPmanagedKey.yaml']
2022-04-17 10:34:51,320 [MainThread  ] [DEBUG]  Searching through [] and ['AWSNATGatewaysshouldbeutilized.yaml', 'APIProtectedByWAF.yaml', 'PostgresRDSHasQueryLoggingEnabled.yaml', 'S3BucketVersioning.yaml', 'PostgresDBHasQueryLoggingEnabled.yaml', 'EIPAllocatedToVPCAttachedEC2.yaml', 'S3PublicACLRead.yaml', 'CloudtrailHasCloudwatch.yaml', 'SubnetHasACL.yaml', 'ALBProtectedByWAF.yaml', 'S3BucketEncryption.yaml', 'S3PublicACLWrite.yaml', 'AWSSSMParameterShouldBeEncrypted.yaml', 'CloudFrontHasSecurityHeadersPolicy.yaml', 'WAF2HasLogs.yaml', 'RDSClusterHasBackupPlan.yaml', 'VPCHasFlowLog.yaml', 'AppSyncProtectedByWAF.yaml', 'GuardDutyIsEnabled.yaml', 'IAMUserHasNoConsoleAccess.yaml', 'EFSAddedBackup.yaml', 'SGAttachedToResource.yaml', 'AutoScalingEnableOnDynamoDBTables.yaml', 'IAMGroupHasAtLeastOneUser.yaml', 'IAMUsersAreMembersAtLeastOneGroup.yaml', 'S3BucketHasPublicAccessBlock.yaml', 'S3BucketLogging.yaml', 'APIGWLoggingLevelsDefinedProperly.yaml', 'Route53ARecordAttachedResource.yaml', 'S3KMSEncryptedByDefault.yaml', 'EBSAddedBackup.yaml', 'AutoScallingEnabledELB.yaml', 'AMRClustersNotOpenToInternet.yaml', 'VPCHasRestrictedSG.yaml', 'EncryptedEBSVolumeOnlyConnectedToEC2s.yaml', 'ALBRedirectsHTTPToHTTPS.yaml', 'HTTPNotSendingPasswords.yaml']
2022-04-17 10:34:51,507 [MainThread  ] [DEBUG]  Loading external checks from /home/epodegrid/.local/lib/python3.9/site-packages/checkov/cloudformation/checks/graph_checks
2022-04-17 10:34:51,507 [MainThread  ] [DEBUG]  Searching through ['__pycache__'] and ['__init__.py']
2022-04-17 10:34:51,507 [MainThread  ] [DEBUG]  Searching through [] and ['__init__.cpython-39.pyc']
2022-04-17 10:34:51,507 [MainThread  ] [DEBUG]  Loading external checks from /home/epodegrid/.local/lib/python3.9/site-packages/checkov/kubernetes/checks/graph_checks
2022-04-17 10:34:51,507 [MainThread  ] [DEBUG]  Searching through ['__pycache__'] and ['__init__.py']
2022-04-17 10:34:51,507 [MainThread  ] [DEBUG]  Searching through [] and ['__init__.cpython-39.pyc']
2022-04-17 10:34:51,507 [MainThread  ] [DEBUG]  Loading external checks from /home/epodegrid/.local/lib/python3.9/site-packages/checkov/bicep/checks/graph_checks
2022-04-17 10:34:51,507 [MainThread  ] [DEBUG]  Loading external checks from /home/epodegrid/.local/lib/python3.9/site-packages/checkov/terraform_plan/checks/graph_checks
2022-04-17 10:34:51,514 [MainThread  ] [INFO ]  Scanning root folder and producing fresh tf_definitions and context
2022-04-17 10:34:51,524 [MainThread  ] [INFO ]  Creating vertices
2022-04-17 10:34:51,525 [MainThread  ] [INFO ]  Creating edges
2022-04-17 10:34:51,525 [MainThread  ] [INFO ]  Rendering variables, graph has 0 vertices and 0 edges
2022-04-17 10:34:51,525 [MainThread  ] [INFO ]  done evaluating edges
2022-04-17 10:34:51,525 [MainThread  ] [INFO ]  done evaluate_non_rendered_values
2022-04-17 10:34:51,526 [MainThread  ] [DEBUG]  Created definitions context
2022-04-17 10:34:51,526 [MainThread  ] [DEBUG]  Loading external checks from /home/epodegrid/.local/lib/python3.9/site-packages/checkov/terraform/checks/graph_checks
2022-04-17 10:34:51,526 [MainThread  ] [DEBUG]  Searching through ['__pycache__', 'azure', 'gcp', 'aws'] and ['__init__.py']
2022-04-17 10:34:51,526 [MainThread  ] [DEBUG]  Searching through [] and ['__init__.cpython-39.pyc']
2022-04-17 10:34:51,526 [MainThread  ] [DEBUG]  Searching through [] and ['VAsetPeriodicScansOnSQL.yaml', 'StorageLoggingIsEnabledForBlobService.yaml', 'CognitiveServicesCustomerManagedKey.yaml', 'MSQLenablesCustomerManagedKey.yaml', 'PGSQLenablesCustomerManagedKey.yaml', 'VMHasBackUpMachine.yaml', 'AzureStorageAccountsUseCustomerManagedKeyForEncryption.yaml', 'DataExplorerEncryptionUsesCustomKey.yaml', 'StorageLoggingIsEnabledForTableService.yaml', 'VAconfiguredToSendReportsToAdmins.yaml', 'AzureNetworkInterfacePublicIPAddressId.yaml', 'VAisEnabledInStorageAccount.yaml', 'VAconfiguredToSendReports.yaml', 'AzureAntimalwareIsConfiguredWithAutoUpdatesForVMs.yaml', 'AzureSynapseWorkspacesHaveNoIPFirewallRulesAttached.yaml', 'AzureUnattachedDisksAreEncrypted.yaml', 'StorageCriticalDataEncryptedCMK.yaml', 'AzureActiveDirectoryAdminIsConfigured.yaml', 'SQLServerAuditingEnabled.yaml', 'AccessToPostgreSQLFromAzureServicesIsDisabled.yaml', 'AzureDataFactoriesEncryptedWithCustomerManagedKey.yaml', 'StorageContainerActivityLogsNotPublic.yaml', 'VirtualMachinesUtilizingManagedDisks.yaml', 'SQLServerAuditingRetention90Days.yaml', 'AzureMSSQLServerHasSecurityAlertPolicy.yaml']
2022-04-17 10:34:51,527 [MainThread  ] [DEBUG]  Parsed file render.yaml incorrectly {}
2022-04-17 10:34:51,528 [MainThread  ] [INFO ]  creating cloudformation graph
2022-04-17 10:34:51,528 [MainThread  ] [INFO ]  [CloudformationLocalGraph] created 0 vertices
2022-04-17 10:34:51,528 [MainThread  ] [INFO ]  [CloudformationLocalGraph] created 0 edges
2022-04-17 10:34:51,528 [MainThread  ] [INFO ]  Rendering variables, graph has 0 vertices and 0 edges
2022-04-17 10:34:51,529 [MainThread  ] [INFO ]  done evaluating edges
2022-04-17 10:34:51,529 [MainThread  ] [INFO ]  done evaluate_non_rendered_values
2022-04-17 10:34:51,529 [MainThread  ] [DEBUG]  Loading external checks from /home/epodegrid/.local/lib/python3.9/site-packages/checkov/cloudformation/checks/graph_checks
2022-04-17 10:34:51,529 [MainThread  ] [DEBUG]  Searching through ['__pycache__'] and ['__init__.py']
2022-04-17 10:34:51,529 [MainThread  ] [DEBUG]  Searching through [] and ['__init__.cpython-39.pyc']
2022-04-17 10:34:51,543 [MainThread  ] [INFO ]  Running with --file argument; checking for Helm Chart.yaml files
2022-04-17 10:34:51,548 [MainThread  ] [INFO ]  creating kubernetes graph
2022-04-17 10:34:51,549 [MainThread  ] [DEBUG]  Loading external checks from /home/epodegrid/.local/lib/python3.9/site-packages/checkov/kubernetes/checks/graph_checks
2022-04-17 10:34:51,549 [MainThread  ] [DEBUG]  Searching through ['__pycache__'] and ['__init__.py']
2022-04-17 10:34:51,549 [MainThread  ] [DEBUG]  Searching through [] and ['__init__.cpython-39.pyc']
2022-04-17 10:34:51,551 [MainThread  ] [DEBUG]  Failed to load /home/epodegrid/Desktop/glitchy/example/render.yaml as is not a .json file, skipping
2022-04-17 10:34:51,554 [MainThread  ] [INFO ]  Secrets scanning will scan 1 files
2022-04-17 10:34:51,559 [MainThread  ] [DEBUG]  The runner requires that external checks are defined.
2022-04-17 10:34:51,559 [MainThread  ] [DEBUG]  The runner requires that external checks are defined.
2022-04-17 10:34:51,571 [MainThread  ] [DEBUG]  Environment variable BITBUCKET_REPO_FULL_NAME was not set. Cannot fetch branch restrictions.
2022-04-17 10:34:51,575 [MainThread  ] [INFO ]  Running with --file argument; file must be a kustomization.yaml file
2022-04-17 10:34:51,585 [MainThread  ] [INFO ]  creating kubernetes graph
2022-04-17 10:34:51,586 [MainThread  ] [DEBUG]  Loading external checks from /home/epodegrid/.local/lib/python3.9/site-packages/checkov/kubernetes/checks/graph_checks
2022-04-17 10:34:51,586 [MainThread  ] [DEBUG]  Searching through ['__pycache__'] and ['__init__.py']
2022-04-17 10:34:51,586 [MainThread  ] [DEBUG]  Searching through [] and ['__init__.cpython-39.pyc']
2022-04-17 10:34:51,586 [MainThread  ] [DEBUG]  Sucessfully ran k8s scan on Kustomization templated files in tmp scan dir : /tmp/tmpfiqi4ys2
2022-04-17 10:34:51,694 [MainThread  ] [DEBUG]  Searching through [] and ['GCPProjectHasNoLegacyNetworks.yaml', 'GCPKMSKeyRingsAreNotPubliclyAccessible.yaml', 'DisableAccessToSqlDBInstanceForRootUsersWithoutPassword.yaml', 'GCPAuditLogsConfiguredForAllServicesAndUsers.yaml', 'GCPContainerRegistryReposAreNotPubliclyAccessible.yaml', 'GCPKMSCryptoKeysAreNotPubliclyAccessible.yaml', 'GCPLogBucketsConfiguredUsingLock.yaml', 'GKEClustersAreNotUsingDefaultServiceAccount.yaml', 'ServiceAccountHasGCPmanagedKey.yaml']
2022-04-17 10:34:51,745 [MainThread  ] [DEBUG]  Searching through [] and ['AWSNATGatewaysshouldbeutilized.yaml', 'APIProtectedByWAF.yaml', 'PostgresRDSHasQueryLoggingEnabled.yaml', 'S3BucketVersioning.yaml', 'PostgresDBHasQueryLoggingEnabled.yaml', 'EIPAllocatedToVPCAttachedEC2.yaml', 'S3PublicACLRead.yaml', 'CloudtrailHasCloudwatch.yaml', 'SubnetHasACL.yaml', 'ALBProtectedByWAF.yaml', 'S3BucketEncryption.yaml', 'S3PublicACLWrite.yaml', 'AWSSSMParameterShouldBeEncrypted.yaml', 'CloudFrontHasSecurityHeadersPolicy.yaml', 'WAF2HasLogs.yaml', 'RDSClusterHasBackupPlan.yaml', 'VPCHasFlowLog.yaml', 'AppSyncProtectedByWAF.yaml', 'GuardDutyIsEnabled.yaml', 'IAMUserHasNoConsoleAccess.yaml', 'EFSAddedBackup.yaml', 'SGAttachedToResource.yaml', 'AutoScalingEnableOnDynamoDBTables.yaml', 'IAMGroupHasAtLeastOneUser.yaml', 'IAMUsersAreMembersAtLeastOneGroup.yaml', 'S3BucketHasPublicAccessBlock.yaml', 'S3BucketLogging.yaml', 'APIGWLoggingLevelsDefinedProperly.yaml', 'Route53ARecordAttachedResource.yaml', 'S3KMSEncryptedByDefault.yaml', 'EBSAddedBackup.yaml', 'AutoScallingEnabledELB.yaml', 'AMRClustersNotOpenToInternet.yaml', 'VPCHasRestrictedSG.yaml', 'EncryptedEBSVolumeOnlyConnectedToEC2s.yaml', 'ALBRedirectsHTTPToHTTPS.yaml', 'HTTPNotSendingPasswords.yaml']
2022-04-17 10:34:51,961 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_AZURE_3
2022-04-17 10:34:51,962 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_AZURE_21
2022-04-17 10:34:51,962 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_AZURE_22
2022-04-17 10:34:51,962 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_AZURE_16
2022-04-17 10:34:51,962 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_AZURE_17
2022-04-17 10:34:51,962 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_AZURE_12
2022-04-17 10:34:51,962 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_AZURE_18
2022-04-17 10:34:51,962 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_AZURE_11
2022-04-17 10:34:51,963 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_AZURE_20
2022-04-17 10:34:51,963 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_AZURE_5
2022-04-17 10:34:51,963 [ThreadPoolEx] [DEBUG]  Running graph check: CKV_AZURE_119
2022-04-17 10:34:51,963 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_AZURE_2
2022-04-17 10:34:51,963 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_AZURE_4
2022-04-17 10:34:51,963 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_AZURE_10
2022-04-17 10:34:51,963 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_AZURE_19
2022-04-17 10:34:51,964 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_AZURE_14
2022-04-17 10:34:51,965 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_AZURE_1
2022-04-17 10:34:51,965 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_AZURE_7
2022-04-17 10:34:51,965 [ThreadPoolEx] [DEBUG]  Running graph check: CKV_AZURE_23
2022-04-17 10:34:51,965 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_AZURE_6
2022-04-17 10:34:51,965 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_AZURE_15
2022-04-17 10:34:51,965 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_AZURE_8
2022-04-17 10:34:51,965 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_AZURE_9
2022-04-17 10:34:51,965 [ThreadPoolEx] [DEBUG]  Running graph check: CKV_AZURE_24
2022-04-17 10:34:51,965 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_AZURE_13
2022-04-17 10:34:51,965 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_GCP_2
2022-04-17 10:34:51,966 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_GCP_8
2022-04-17 10:34:51,966 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_GCP_7
2022-04-17 10:34:51,966 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_GCP_5
2022-04-17 10:34:51,966 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_GCP_9
2022-04-17 10:34:51,966 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_GCP_6
2022-04-17 10:34:51,966 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_GCP_4
2022-04-17 10:34:51,966 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_GCP_1
2022-04-17 10:34:51,967 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_GCP_3
2022-04-17 10:34:51,967 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_AWS_35
2022-04-17 10:34:51,967 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_AWS_29
2022-04-17 10:34:51,967 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_AWS_27
2022-04-17 10:34:51,967 [ThreadPoolEx] [DEBUG]  Running graph check: CKV_AWS_21
2022-04-17 10:34:51,967 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_AWS_30
2022-04-17 10:34:51,967 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_AWS_19
2022-04-17 10:34:51,967 [ThreadPoolEx] [DEBUG]  Running graph check: CKV_AWS_20
2022-04-17 10:34:51,967 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_AWS_10
2022-04-17 10:34:51,967 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_AWS_1
2022-04-17 10:34:51,968 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_AWS_28
2022-04-17 10:34:51,968 [ThreadPoolEx] [DEBUG]  Running graph check: CKV_AWS_19
2022-04-17 10:34:51,968 [ThreadPoolEx] [DEBUG]  Running graph check: CKV_AWS_57
2022-04-17 10:34:51,968 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_AWS_34
2022-04-17 10:34:51,968 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_AWS_32
2022-04-17 10:34:51,969 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_AWS_31
2022-04-17 10:34:51,969 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_AWS_8
2022-04-17 10:34:51,969 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_AWS_11
2022-04-17 10:34:51,969 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_AWS_33
2022-04-17 10:34:51,969 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_AWS_3
2022-04-17 10:34:51,969 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_AWS_22
2022-04-17 10:34:51,970 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_AWS_18
2022-04-17 10:34:51,970 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_AWS_5
2022-04-17 10:34:51,970 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_AWS_16
2022-04-17 10:34:51,970 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_AWS_14
2022-04-17 10:34:51,970 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_AWS_21
2022-04-17 10:34:51,970 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_AWS_6
2022-04-17 10:34:51,970 [ThreadPoolEx] [DEBUG]  Running graph check: CKV_AWS_18
2022-04-17 10:34:51,970 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_AWS_4
2022-04-17 10:34:51,970 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_AWS_23
2022-04-17 10:34:51,970 [ThreadPoolEx] [DEBUG]  Running graph check: CKV_AWS_145
2022-04-17 10:34:51,971 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_AWS_9
2022-04-17 10:34:51,971 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_AWS_15
2022-04-17 10:34:51,971 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_AWS_7
2022-04-17 10:34:51,971 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_AWS_12
2022-04-17 10:34:51,971 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_AWS_2
2022-04-17 10:34:51,971 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_AWS_20
2022-04-17 10:34:51,971 [ThreadPoolEx] [DEBUG]  Running graph check: CKV2_AWS_36

       _               _              
   ___| |__   ___  ___| | _______   __
  / __| '_ \ / _ \/ __| |/ / _ \ \ / /
 | (__| | | |  __/ (__|   < (_) \ V / 
  \___|_| |_|\___|\___|_|\_\___/ \_/  
                                      
By bridgecrew.io | version: 2.0.1066 

2022-04-17 10:34:51,974 [MainThread  ] [DEBUG]  Getting exit code for report terraform
2022-04-17 10:34:51,975 [MainThread  ] [DEBUG]  In get_exit_code; soft_fail: False, soft_fail_on: None, hard_fail_on: None
2022-04-17 10:34:51,975 [MainThread  ] [DEBUG]  No failed checks, or soft_fail is True and soft_fail_on and hard_fail_on are empty - returning 0
2022-04-17 10:34:51,975 [MainThread  ] [DEBUG]  Getting exit code for report cloudformation
2022-04-17 10:34:51,975 [MainThread  ] [DEBUG]  In get_exit_code; soft_fail: False, soft_fail_on: None, hard_fail_on: None
2022-04-17 10:34:51,975 [MainThread  ] [DEBUG]  No failed checks, or soft_fail is True and soft_fail_on and hard_fail_on are empty - returning 0
2022-04-17 10:34:51,975 [MainThread  ] [DEBUG]  Getting exit code for report kubernetes
2022-04-17 10:34:51,975 [MainThread  ] [DEBUG]  In get_exit_code; soft_fail: False, soft_fail_on: None, hard_fail_on: None
2022-04-17 10:34:51,975 [MainThread  ] [DEBUG]  No failed checks, or soft_fail is True and soft_fail_on and hard_fail_on are empty - returning 0
2022-04-17 10:34:51,975 [MainThread  ] [DEBUG]  Getting exit code for report serverless
2022-04-17 10:34:51,975 [MainThread  ] [DEBUG]  In get_exit_code; soft_fail: False, soft_fail_on: None, hard_fail_on: None
2022-04-17 10:34:51,975 [MainThread  ] [DEBUG]  No failed checks, or soft_fail is True and soft_fail_on and hard_fail_on are empty - returning 0
2022-04-17 10:34:51,975 [MainThread  ] [DEBUG]  Getting exit code for report arm
2022-04-17 10:34:51,975 [MainThread  ] [DEBUG]  In get_exit_code; soft_fail: False, soft_fail_on: None, hard_fail_on: None
2022-04-17 10:34:51,975 [MainThread  ] [DEBUG]  No failed checks, or soft_fail is True and soft_fail_on and hard_fail_on are empty - returning 0
2022-04-17 10:34:51,975 [MainThread  ] [DEBUG]  Getting exit code for report terraform_plan
2022-04-17 10:34:51,975 [MainThread  ] [DEBUG]  In get_exit_code; soft_fail: False, soft_fail_on: None, hard_fail_on: None
2022-04-17 10:34:51,975 [MainThread  ] [DEBUG]  No failed checks, or soft_fail is True and soft_fail_on and hard_fail_on are empty - returning 0
2022-04-17 10:34:51,976 [MainThread  ] [DEBUG]  Getting exit code for report helm
2022-04-17 10:34:51,976 [MainThread  ] [DEBUG]  In get_exit_code; soft_fail: False, soft_fail_on: None, hard_fail_on: None
2022-04-17 10:34:51,976 [MainThread  ] [DEBUG]  No failed checks, or soft_fail is True and soft_fail_on and hard_fail_on are empty - returning 0
2022-04-17 10:34:51,976 [MainThread  ] [DEBUG]  Getting exit code for report dockerfile
2022-04-17 10:34:51,976 [MainThread  ] [DEBUG]  In get_exit_code; soft_fail: False, soft_fail_on: None, hard_fail_on: None
2022-04-17 10:34:51,976 [MainThread  ] [DEBUG]  No failed checks, or soft_fail is True and soft_fail_on and hard_fail_on are empty - returning 0
2022-04-17 10:34:51,976 [MainThread  ] [DEBUG]  Getting exit code for report secrets
2022-04-17 10:34:51,976 [MainThread  ] [DEBUG]  In get_exit_code; soft_fail: False, soft_fail_on: None, hard_fail_on: None
2022-04-17 10:34:51,976 [MainThread  ] [DEBUG]  No failed checks, or soft_fail is True and soft_fail_on and hard_fail_on are empty - returning 0
2022-04-17 10:34:51,976 [MainThread  ] [DEBUG]  Getting exit code for report json
2022-04-17 10:34:51,976 [MainThread  ] [DEBUG]  In get_exit_code; soft_fail: False, soft_fail_on: None, hard_fail_on: None
2022-04-17 10:34:51,976 [MainThread  ] [DEBUG]  No failed checks, or soft_fail is True and soft_fail_on and hard_fail_on are empty - returning 0
2022-04-17 10:34:51,976 [MainThread  ] [DEBUG]  Getting exit code for report yaml
2022-04-17 10:34:51,976 [MainThread  ] [DEBUG]  In get_exit_code; soft_fail: False, soft_fail_on: None, hard_fail_on: None
2022-04-17 10:34:51,976 [MainThread  ] [DEBUG]  No failed checks, or soft_fail is True and soft_fail_on and hard_fail_on are empty - returning 0
2022-04-17 10:34:51,976 [MainThread  ] [DEBUG]  Getting exit code for report github_configuration
2022-04-17 10:34:51,976 [MainThread  ] [DEBUG]  In get_exit_code; soft_fail: False, soft_fail_on: None, hard_fail_on: None
2022-04-17 10:34:51,976 [MainThread  ] [DEBUG]  No failed checks, or soft_fail is True and soft_fail_on and hard_fail_on are empty - returning 0
2022-04-17 10:34:51,976 [MainThread  ] [DEBUG]  Getting exit code for report gitlab_configuration
2022-04-17 10:34:51,976 [MainThread  ] [DEBUG]  In get_exit_code; soft_fail: False, soft_fail_on: None, hard_fail_on: None
2022-04-17 10:34:51,976 [MainThread  ] [DEBUG]  No failed checks, or soft_fail is True and soft_fail_on and hard_fail_on are empty - returning 0
2022-04-17 10:34:51,976 [MainThread  ] [DEBUG]  Getting exit code for report bitbucket_configuration
2022-04-17 10:34:51,976 [MainThread  ] [DEBUG]  In get_exit_code; soft_fail: False, soft_fail_on: None, hard_fail_on: None
2022-04-17 10:34:51,977 [MainThread  ] [DEBUG]  No failed checks, or soft_fail is True and soft_fail_on and hard_fail_on are empty - returning 0
2022-04-17 10:34:51,977 [MainThread  ] [DEBUG]  Getting exit code for report kustomize
2022-04-17 10:34:51,977 [MainThread  ] [DEBUG]  In get_exit_code; soft_fail: False, soft_fail_on: None, hard_fail_on: None
2022-04-17 10:34:51,977 [MainThread  ] [DEBUG]  No failed checks, or soft_fail is True and soft_fail_on and hard_fail_on are empty - returning 0
2022-04-17 10:34:51,977 [MainThread  ] [DEBUG]  Getting exit code for report sca_package
2022-04-17 10:34:51,977 [MainThread  ] [DEBUG]  In get_exit_code; soft_fail: False, soft_fail_on: None, hard_fail_on: None
2022-04-17 10:34:51,977 [MainThread  ] [DEBUG]  No failed checks, or soft_fail is True and soft_fail_on and hard_fail_on are empty - returning 0
2022-04-17 10:34:51,977 [MainThread  ] [DEBUG]  Getting exit code for report github_actions
2022-04-17 10:34:51,977 [MainThread  ] [DEBUG]  In get_exit_code; soft_fail: False, soft_fail_on: None, hard_fail_on: None
2022-04-17 10:34:51,977 [MainThread  ] [DEBUG]  No failed checks, or soft_fail is True and soft_fail_on and hard_fail_on are empty - returning 0
2022-04-17 10:34:51,977 [MainThread  ] [DEBUG]  Getting exit code for report bicep
2022-04-17 10:34:51,977 [MainThread  ] [DEBUG]  In get_exit_code; soft_fail: False, soft_fail_on: None, hard_fail_on: None
2022-04-17 10:34:51,977 [MainThread  ] [DEBUG]  No failed checks, or soft_fail is True and soft_fail_on and hard_fail_on are empty - returning 0
2022-04-17 10:34:51,977 [MainThread  ] [DEBUG]  Getting exit code for report openapi
2022-04-17 10:34:51,977 [MainThread  ] [DEBUG]  In get_exit_code; soft_fail: False, soft_fail_on: None, hard_fail_on: None
2022-04-17 10:34:51,977 [MainThread  ] [DEBUG]  No failed checks, or soft_fail is True and soft_fail_on and hard_fail_on are empty - returning 0
2022-04-17 10:34:51,977 [MainThread  ] [DEBUG]  Getting exit code for report sca_image
2022-04-17 10:34:51,977 [MainThread  ] [DEBUG]  In get_exit_code; soft_fail: False, soft_fail_on: None, hard_fail_on: None
2022-04-17 10:34:51,977 [MainThread  ] [DEBUG]  No failed checks, or soft_fail is True and soft_fail_on and hard_fail_on are empty - returning 0

I can also upload render.yaml should you need it.

epodegrid avatar Apr 17 '22 08:04 epodegrid

Hey @epodegrid!

Looking at the logs, I expected to find a parsing error or something similar, but there is no hint of that. Can you share an anonymized version of the file for further research? I believe some validation we wrote is marking this file as invalid by mistake.

nimrodkor avatar Apr 18 '22 12:04 nimrodkor

Hi @nimrodkor, here is the generated YAML file. It's not anonymized, since I just pulled it from the public Helm repository.

---
# Source: prometheus/templates/rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: prometheus
    group: com.stakater.platform
    provider: stakater
    version: "2.2.0-rc.0"
    chart: "prometheus-1.0.32"
    release: "release-name"
    heritage: "Helm"
  name: monitoring-k8s
  namespace: default
---
# Source: prometheus/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: monitoring-k8s-rules
  namespace: default
  labels:
    role: prometheus-rulefiles
    prometheus: monitoring-k8s
    app: prometheus
    group: com.stakater.platform
    provider: stakater
    version: "2.2.0-rc.0"
    chart: "prometheus-1.0.32"
    release: "release-name"
    heritage: "Helm"
data:
  alertmanager.rules: |
    groups:
    - name: alertmanager.rules
      rules:
      - alert: AlertmanagerConfigInconsistent
        expr: count_values("config_hash", alertmanager_config_hash) BY (service) / ON(service)
          GROUP_LEFT() label_replace(prometheus_operator_alertmanager_spec_replicas, "service",
          "alertmanager-$1", "alertmanager", "(.*)") != 1
        for: 5m
        labels:
          severity: critical
        annotations:
          description: The configuration of the instances of the Alertmanager cluster
            `{{$labels.service}}` are out of sync.
          summary: Alertmanager configurations are inconsistent
      - alert: AlertmanagerDownOrMissing
        expr: label_replace(prometheus_operator_alertmanager_spec_replicas, "job", "alertmanager-$1",
          "alertmanager", "(.*)") / ON(job) GROUP_RIGHT() sum(up) BY (job) != 1
        for: 5m
        labels:
          severity: warning
        annotations:
          description: An unexpected number of Alertmanagers are scraped or Alertmanagers
            disappeared from discovery.
          summary: Alertmanager down or not discovered
      - alert: FailedReload
        expr: alertmanager_config_last_reload_successful == 0
        for: 10m
        labels:
          severity: warning
        annotations:
          description: Reloading Alertmanager's configuration has failed for {{ $labels.namespace
            }}/{{ $labels.pod}}.
          summary: Alertmanager configuration reload has failed
  etcd3.rules: |
    groups:
    - name: etcd.rules
      rules:
      - alert: InsufficientMembers
        expr: count(up{job="etcd"} == 0) > (count(up{job="etcd"}) / 2 - 1)
        for: 3m
        labels:
          severity: critical
          kind: infra
        annotations:
          description: If one more etcd member goes down the cluster will be unavailable
          summary: etcd cluster insufficient members
      - alert: NoLeader
        expr: etcd_server_has_leader{job="etcd"} == 0
        for: 1m
        labels:
          severity: critical
          kind: infra
        annotations:
          description: etcd member {{ $labels.instance }} has no leader
          summary: etcd member has no leader
      - alert: HighNumberOfLeaderChanges
        expr: increase(etcd_server_leader_changes_seen_total{job="etcd"}[1h]) > 3
        labels:
          severity: warning
          kind: infra
        annotations:
          description: etcd instance {{ $labels.instance }} has seen {{ $value }} leader
            changes within the last hour
          summary: a high number of leader changes within the etcd cluster are happening
      - alert: HighNumberOfFailedGRPCRequests
        expr: sum(rate(etcd_grpc_requests_failed_total{job="etcd"}[5m])) BY (grpc_method)
          / sum(rate(etcd_grpc_total{job="etcd"}[5m])) BY (grpc_method) > 0.01
        for: 10m
        labels:
          severity: warning
          kind: infra
        annotations:
          description: '{{ $value }}% of requests for {{ $labels.grpc_method }} failed
            on etcd instance {{ $labels.instance }}'
          summary: a high number of gRPC requests are failing
      - alert: HighNumberOfFailedGRPCRequests
        expr: sum(rate(etcd_grpc_requests_failed_total{job="etcd"}[5m])) BY (grpc_method)
          / sum(rate(etcd_grpc_total{job="etcd"}[5m])) BY (grpc_method) > 0.05
        for: 5m
        labels:
          severity: critical
          kind: infra
        annotations:
          description: '{{ $value }}% of requests for {{ $labels.grpc_method }} failed
            on etcd instance {{ $labels.instance }}'
          summary: a high number of gRPC requests are failing
      - alert: GRPCRequestsSlow
        expr: histogram_quantile(0.99, rate(etcd_grpc_unary_requests_duration_seconds_bucket[5m]))
          > 0.15
        for: 10m
        labels:
          severity: critical
          kind: infra
        annotations:
          description: on etcd instance {{ $labels.instance }} gRPC requests to {{ $labels.grpc_method
            }} are slow
          summary: slow gRPC requests
      - alert: HighNumberOfFailedHTTPRequests
        expr: sum(rate(etcd_http_failed_total{job="etcd"}[5m])) BY (method) / sum(rate(etcd_http_received_total{job="etcd"}[5m]))
          BY (method) > 0.01
        for: 10m
        labels:
          severity: warning
          kind: infra
        annotations:
          description: '{{ $value }}% of requests for {{ $labels.method }} failed on etcd
            instance {{ $labels.instance }}'
          summary: a high number of HTTP requests are failing
      - alert: HighNumberOfFailedHTTPRequests
        expr: sum(rate(etcd_http_failed_total{job="etcd"}[5m])) BY (method) / sum(rate(etcd_http_received_total{job="etcd"}[5m]))
          BY (method) > 0.05
        for: 5m
        labels:
          severity: critical
          kind: infra
        annotations:
          description: '{{ $value }}% of requests for {{ $labels.method }} failed on etcd
            instance {{ $labels.instance }}'
          summary: a high number of HTTP requests are failing
      - alert: HTTPRequestsSlow
        expr: histogram_quantile(0.99, rate(etcd_http_successful_duration_seconds_bucket[5m]))
          > 0.15
        for: 10m
        labels:
          severity: warning
          kind: infra
        annotations:
          description: on etcd instance {{ $labels.instance }} HTTP requests to {{ $labels.method
            }} are slow
          summary: slow HTTP requests
      - alert: EtcdMemberCommunicationSlow
        expr: histogram_quantile(0.99, rate(etcd_network_member_round_trip_time_seconds_bucket[5m]))
          > 0.15
        for: 10m
        labels:
          severity: warning
          kind: infra
        annotations:
          description: etcd instance {{ $labels.instance }} member communication with
            {{ $labels.To }} is slow
          summary: etcd member communication is slow
      - alert: HighNumberOfFailedProposals
        expr: increase(etcd_server_proposals_failed_total{job="etcd"}[1h]) > 5
        labels:
          severity: warning
          kind: infra
        annotations:
          description: etcd instance {{ $labels.instance }} has seen {{ $value }} proposal
            failures within the last hour
          summary: a high number of proposals within the etcd cluster are failing
      - alert: HighFsyncDurations
        expr: histogram_quantile(0.99, rate(etcd_disk_wal_fsync_duration_seconds_bucket[5m]))
          > 0.5
        for: 10m
        labels:
          severity: warning
          kind: infra
        annotations:
          description: etcd instance {{ $labels.instance }} fync durations are high
          summary: high fsync durations
      - alert: HighCommitDurations
        expr: histogram_quantile(0.99, rate(etcd_disk_backend_commit_duration_seconds_bucket[5m]))
          > 0.25
        for: 10m
        labels:
          severity: warning
          kind: infra
        annotations:
          description: etcd instance {{ $labels.instance }} commit durations are high
          summary: high commit durations
  general.rules: "groups:\n- name: pods.rules\n  rules:\n  - alert: PodsDown\n    expr:
    kube_pod_info{created_by_kind!=\"Job\"} == 1 and ON(pod) kube_pod_status_ready{condition=\"false\"}
    == 1 and on(pod) kube_pod_container_status_waiting > 0\n    for: 5m\n    labels:\n
    \     severity: critical                \n    annotations:\n      description: '{{
    $labels.pod }} from {{ $labels.namespace }} is red.'\n      summary: Pods are down\n
    \ - alert: JobsFailed\n    expr: kube_pod_info{created_by_kind=\"Job\"} == 1 and
    ON(pod) (kube_pod_status_phase{phase=\"Failed\"} == 1 or kube_pod_status_phase{phase=\"Unknown\"}
    == 1)\n    for: 1s\n    labels:\n      severity: critical\n    annotations:\n      description:
    '{{ $labels.pod }} from {{ $labels.namespace }} has failed.'\n      summary: Jobs
    failed\n- name: general.rules\n  rules:\n  - alert: TargetDown\n    expr: 100 *
    (count(up == 0) BY (job) / count(up) BY (job)) > 10\n    for: 10m\n    labels:\n
    \     severity: warning\n    annotations:\n      description: '{{ $value }}% or
    more of {{ $labels.job }} targets are down.'\n      summary: Targets are down\n
    \ - alert: DeadMansSwitch\n    expr: vector(1)\n    labels:\n      severity: none\n
    \   annotations:\n      description: This is a DeadMansSwitch meant to ensure that
    the entire Alerting\n        pipeline is functional.\n      summary: Alerting DeadMansSwitch\n
    \ - alert: TooManyOpenFileDescriptors\n    expr: 100 * (process_open_fds / process_max_fds)
    > 95\n    for: 10m\n    labels:\n      severity: critical\n    annotations:\n      description:
    '{{ $labels.job }}: {{ $labels.namespace }}/{{ $labels.pod }} ({{\n        $labels.instance
    }}) is using {{ $value }}% of the available file/socket descriptors.'\n      summary:
    too many open file descriptors\n  - record: instance:fd_utilization\n    expr: process_open_fds
    / process_max_fds\n  - alert: FdExhaustionClose\n    expr: predict_linear(instance:fd_utilization[1h],
    3600 * 4) > 1\n    for: 10m\n    labels:\n      severity: warning\n    annotations:\n
    \     description: '{{ $labels.job }}: {{ $labels.namespace }}/{{ $labels.pod }}
    ({{\n        $labels.instance }}) instance will exhaust in file/socket descriptors
    soon'\n      summary: file descriptors soon exhausted\n  - alert: FdExhaustionClose\n
    \   expr: predict_linear(instance:fd_utilization[10m], 3600) > 1\n    for: 10m\n
    \   labels:\n      severity: critical\n    annotations:\n      description: '{{
    $labels.job }}: {{ $labels.namespace }}/{{ $labels.pod }} ({{\n        $labels.instance
    }}) instance will exhaust in file/socket descriptors soon'\n      summary: file
    descriptors soon exhausted\n"
  job.rules: |
    groups:
    - name: job.rules
      rules:
      - alert: CronJobRunning
        expr: time() -kube_cronjob_next_schedule_time > 3600
        for: 1h
        labels:
          severity: warning
        annotations:
          description: CronJob {{$labels.namespaces}}/{{$labels.cronjob}} is taking more than 1h to complete
          summary: CronJob didn't finish after 1h
  
      - alert: JobCompletion
        expr: kube_job_spec_completions - kube_job_status_succeeded  > 0
        for: 1h
        labels:
          severity: warning
        annotations:
          description: Job completion is taking more than 1h to complete
            cronjob {{$labels.namespaces}}/{{$labels.job}}
          summary: Job {{$labels.job}} didn't finish to complete after 1h
  
      - alert: JobFailed
        expr: kube_job_status_failed  > 0
        for: 1h
        labels:
          severity: warning
        annotations:
          description: Job {{$labels.namespaces}}/{{$labels.job}} failed to complete
          summary: Job failed
  kube-apiserver.rules: |
    groups:
    - name: kube-apiserver.rules
      rules:
      - alert: K8SApiServerLatency
        expr: histogram_quantile(0.99, sum(apiserver_request_latencies_bucket{subresource!="log",verb!~"CONNECT|WATCHLIST|WATCH|PROXY"})
          WITHOUT (instance, resource)) / 1e+06 > 10
        for: 10m
        labels:
          severity: warning
          kind: infra
        annotations:
          description: 99th percentile Latency for {{ $labels.verb }} requests to the
            kube-apiserver is higher than 10s.
          summary: Kubernetes apiserver latency is high
  kube-state-metrics.rules: "groups:\n- name: kube-state-metrics.rules\n  rules:\n  -
    alert: DeploymentGenerationMismatch\n    expr: kube_deployment_status_observed_generation
    != kube_deployment_metadata_generation\n    for: 15m\n    labels:\n      severity:
    warning\n    annotations:\n      description: Observed deployment generation does
    not match expected one for\n        deployment {{$labels.namespaces}}/{{$labels.deployment}}\n
    \     summary: Deployment is outdated\n  - alert: DeploymentReplicasNotUpdated\n
    \   expr: ((kube_deployment_status_replicas_updated != kube_deployment_spec_replicas)\n
    \     or (kube_deployment_status_replicas_available != kube_deployment_spec_replicas))\n
    \     unless (kube_deployment_spec_paused == 1)\n    for: 15m\n    labels:\n      severity:
    warning\n    annotations:\n      description: Replicas are not updated and available
    for deployment {{$labels.namespaces}}/{{$labels.deployment}}\n      summary: Deployment
    replicas are outdated\n  - alert: DaemonSetRolloutStuck\n    expr: kube_daemonset_status_number_ready
    / kube_daemonset_status_desired_number_scheduled\n      * 100 < 100\n    for: 15m\n
    \   labels:\n      severity: warning\n    annotations:\n      description: Only
    {{$value}}% of desired pods scheduled and ready for daemon\n        set {{$labels.namespaces}}/{{$labels.daemonset}}\n
    \     summary: DaemonSet is missing pods\n  - alert: K8SDaemonSetsNotScheduled\n
    \   expr: kube_daemonset_status_desired_number_scheduled - kube_daemonset_status_current_number_scheduled\n
    \     > 0\n    for: 10m\n    labels:\n      severity: warning\n    annotations:\n
    \     description: A number of daemonsets are not scheduled.\n      summary: Daemonsets
    are not scheduled correctly\n  - alert: DaemonSetsMissScheduled\n    expr: kube_daemonset_status_number_misscheduled
    > 0\n    for: 10m\n    labels:\n      severity: warning\n    annotations:\n      description:
    A number of daemonsets are running where they are not supposed\n        to run.\n
    \     summary: Daemonsets are not scheduled correctly\n  - alert: PodFrequentlyRestarting\n
    \   expr: increase(kube_pod_container_status_restarts_total[1h]) > 5\n    for: 10m\n
    \   labels:\n      severity: warning                \n    annotations:\n      description:
    Pod {{$labels.namespaces}}/{{$labels.pod}} is was restarted {{$value}}\n        times
    within the last hour\n      summary: Pod is restarting frequently\n"
  kubelet.rules: |
    groups:
    - name: kubelet.rules
      rules:
      - alert: K8SNodeNotReady
        expr: kube_node_status_condition{condition="Ready",status="true"} == 0
        for: 5m
        labels:
          severity: warning
          kind: infra
        annotations:
          description: The Kubelet on {{ $labels.node }} has not checked in with the API,
            or has set itself to NotReady, for more than an hour
          summary: Node status is NotReady
      - alert: K8SManyNodesNotReady
        expr: count(kube_node_status_condition{condition="Ready",status="true"} == 0)
          > 1 and (count(kube_node_status_condition{condition="Ready",status="true"} ==
          0) / count(kube_node_status_condition{condition="Ready",status="true"})) > 0.2
        for: 1m
        labels:
          severity: critical
          kind: infra
        annotations:
          description: '{{ $value }} Kubernetes nodes (more than 10% are in the NotReady
            state).'
          summary: Many Kubernetes nodes are Not Ready
      - alert: K8SKubeletDown
        expr: count(up{job="kubelet"} == 0) / count(up{job="kubelet"}) > 0.03
        for: 5m
        labels:
          severity: warning
          kind: infra
        annotations:
          description: Prometheus failed to scrape {{ $value }}% of kubelets.
          summary: Many Kubelets cannot be scraped
      - alert: K8SKubeletDown
        expr: absent(up{job="kubelet"} == 1) or count(up{job="kubelet"} == 0) / count(up{job="kubelet"})
          > 0.1
        for: 5m
        labels:
          severity: critical
          kind: infra
        annotations:
          description: Prometheus failed to scrape {{ $value }}% of kubelets, or all Kubelets
            have disappeared from service discovery.
          summary: Many Kubelets cannot be scraped
      - alert: K8SKubeletTooManyPods
        expr: kubelet_running_pod_count > 100
        labels:
          severity: warning
          kind: infra
        annotations:
          description: Kubelet {{$labels.instance}} is running {{$value}} pods, close
            to the limit of 110
          summary: Kubelet is close to pod limit
  kubernetes.rules: |
    groups:
    - name: kubernetes.rules
      rules:
      - record: cluster_namespace_controller_pod_container:spec_memory_limit_bytes
        expr: sum(label_replace(container_spec_memory_limit_bytes{container_name!=""},
          "controller", "$1", "pod_name", "^(.*)-[a-z0-9]+")) BY (cluster, namespace,
          controller, pod_name, container_name)
      - record: cluster_namespace_controller_pod_container:spec_cpu_shares
        expr: sum(label_replace(container_spec_cpu_shares{container_name!=""}, "controller",
          "$1", "pod_name", "^(.*)-[a-z0-9]+")) BY (cluster, namespace, controller, pod_name,
          container_name)
      - record: cluster_namespace_controller_pod_container:cpu_usage:rate
        expr: sum(label_replace(irate(container_cpu_usage_seconds_total{container_name!=""}[5m]),
          "controller", "$1", "pod_name", "^(.*)-[a-z0-9]+")) BY (cluster, namespace,
          controller, pod_name, container_name)
      - record: cluster_namespace_controller_pod_container:memory_usage:bytes
        expr: sum(label_replace(container_memory_usage_bytes{container_name!=""}, "controller",
          "$1", "pod_name", "^(.*)-[a-z0-9]+")) BY (cluster, namespace, controller, pod_name,
          container_name)
      - record: cluster_namespace_controller_pod_container:memory_working_set:bytes
        expr: sum(label_replace(container_memory_working_set_bytes{container_name!=""},
          "controller", "$1", "pod_name", "^(.*)-[a-z0-9]+")) BY (cluster, namespace,
          controller, pod_name, container_name)
      - record: cluster_namespace_controller_pod_container:memory_rss:bytes
        expr: sum(label_replace(container_memory_rss{container_name!=""}, "controller",
          "$1", "pod_name", "^(.*)-[a-z0-9]+")) BY (cluster, namespace, controller, pod_name,
          container_name)
      - record: cluster_namespace_controller_pod_container:memory_cache:bytes
        expr: sum(label_replace(container_memory_cache{container_name!=""}, "controller",
          "$1", "pod_name", "^(.*)-[a-z0-9]+")) BY (cluster, namespace, controller, pod_name,
          container_name)
      - record: cluster_namespace_controller_pod_container:disk_usage:bytes
        expr: sum(label_replace(container_disk_usage_bytes{container_name!=""}, "controller",
          "$1", "pod_name", "^(.*)-[a-z0-9]+")) BY (cluster, namespace, controller, pod_name,
          container_name)
      - record: cluster_namespace_controller_pod_container:memory_pagefaults:rate
        expr: sum(label_replace(irate(container_memory_failures_total{container_name!=""}[5m]),
          "controller", "$1", "pod_name", "^(.*)-[a-z0-9]+")) BY (cluster, namespace,
          controller, pod_name, container_name, scope, type)
      - record: cluster_namespace_controller_pod_container:memory_oom:rate
        expr: sum(label_replace(irate(container_memory_failcnt{container_name!=""}[5m]),
          "controller", "$1", "pod_name", "^(.*)-[a-z0-9]+")) BY (cluster, namespace,
          controller, pod_name, container_name, scope, type)
      - record: cluster:memory_allocation:percent
        expr: 100 * sum(container_spec_memory_limit_bytes{pod_name!=""}) BY (cluster)
          / sum(machine_memory_bytes) BY (cluster)
      - record: cluster:memory_used:percent
        expr: 100 * sum(container_memory_usage_bytes{pod_name!=""}) BY (cluster) / sum(machine_memory_bytes)
          BY (cluster)
      - record: cluster:cpu_allocation:percent
        expr: 100 * sum(container_spec_cpu_shares{pod_name!=""}) BY (cluster) / sum(container_spec_cpu_shares{id="/"}
          * ON(cluster, instance) machine_cpu_cores) BY (cluster)
      - record: cluster:node_cpu_use:percent
        expr: 100 * sum(rate(node_cpu{mode!="idle"}[5m])) BY (cluster) / sum(machine_cpu_cores)
          BY (cluster)
      - record: cluster_resource_verb:apiserver_latency:quantile_seconds
        expr: histogram_quantile(0.99, sum(apiserver_request_latencies_bucket) BY (le,
          cluster, job, resource, verb)) / 1e+06
        labels:
          quantile: "0.99"
      - record: cluster_resource_verb:apiserver_latency:quantile_seconds
        expr: histogram_quantile(0.9, sum(apiserver_request_latencies_bucket) BY (le,
          cluster, job, resource, verb)) / 1e+06
        labels:
          quantile: "0.9"
      - record: cluster_resource_verb:apiserver_latency:quantile_seconds
        expr: histogram_quantile(0.5, sum(apiserver_request_latencies_bucket) BY (le,
          cluster, job, resource, verb)) / 1e+06
        labels:
          quantile: "0.5"
      - record: cluster:scheduler_e2e_scheduling_latency:quantile_seconds
        expr: histogram_quantile(0.99, sum(scheduler_e2e_scheduling_latency_microseconds_bucket)
          BY (le, cluster)) / 1e+06
        labels:
          quantile: "0.99"
      - record: cluster:scheduler_e2e_scheduling_latency:quantile_seconds
        expr: histogram_quantile(0.9, sum(scheduler_e2e_scheduling_latency_microseconds_bucket)
          BY (le, cluster)) / 1e+06
        labels:
          quantile: "0.9"
      - record: cluster:scheduler_e2e_scheduling_latency:quantile_seconds
        expr: histogram_quantile(0.5, sum(scheduler_e2e_scheduling_latency_microseconds_bucket)
          BY (le, cluster)) / 1e+06
        labels:
          quantile: "0.5"
      - record: cluster:scheduler_scheduling_algorithm_latency:quantile_seconds
        expr: histogram_quantile(0.99, sum(scheduler_scheduling_algorithm_latency_microseconds_bucket)
          BY (le, cluster)) / 1e+06
        labels:
          quantile: "0.99"
      - record: cluster:scheduler_scheduling_algorithm_latency:quantile_seconds
        expr: histogram_quantile(0.9, sum(scheduler_scheduling_algorithm_latency_microseconds_bucket)
          BY (le, cluster)) / 1e+06
        labels:
          quantile: "0.9"
      - record: cluster:scheduler_scheduling_algorithm_latency:quantile_seconds
        expr: histogram_quantile(0.5, sum(scheduler_scheduling_algorithm_latency_microseconds_bucket)
          BY (le, cluster)) / 1e+06
        labels:
          quantile: "0.5"
      - record: cluster:scheduler_binding_latency:quantile_seconds
        expr: histogram_quantile(0.99, sum(scheduler_binding_latency_microseconds_bucket)
          BY (le, cluster)) / 1e+06
        labels:
          quantile: "0.99"
      - record: cluster:scheduler_binding_latency:quantile_seconds
        expr: histogram_quantile(0.9, sum(scheduler_binding_latency_microseconds_bucket)
          BY (le, cluster)) / 1e+06
        labels:
          quantile: "0.9"
      - record: cluster:scheduler_binding_latency:quantile_seconds
        expr: histogram_quantile(0.5, sum(scheduler_binding_latency_microseconds_bucket)
          BY (le, cluster)) / 1e+06
        labels:
          quantile: "0.5"
  node.rules: |
    groups:
    - name: node.rules
      rules:
      - alert: NodeExporterDown
        expr: absent(up{job="node-exporter"} == 1)
        for: 10m
        labels:
          severity: warning
          kind: infra
        annotations:
          description: Prometheus could not scrape a node-exporter for more than 10m,
            or node-exporters have disappeared from discovery.
          summary: node-exporter cannot be scraped
      - alert: K8SNodeOutOfDisk
        expr: kube_node_status_condition{condition="OutOfDisk",status="true"} == 1
        labels:
          service: k8s
          severity: critical
          kind: infra
        annotations:
          description: '{{ $labels.node }} has run out of disk space.'
          summary: Node ran out of disk space.
      - alert: K8SNodeMemoryPressure
        expr: kube_node_status_condition{condition="MemoryPressure",status="true"} ==
          1
        labels:
          service: k8s
          severity: warning
          kind: infra
        annotations:
          description: '{{ $labels.node }} is under memory pressure.'
          summary: Node is under memory pressure.
      - alert: K8SNodeDiskPressure
        expr: kube_node_status_condition{condition="DiskPressure",status="true"} == 1
        labels:
          service: k8s
          severity: warning
          kind: infra
        annotations:
          description: '{{ $labels.node }} is under disk pressure.'
          summary: Node is under disk pressure.
      - alert: NodeCPUUsage
        expr: (100 - (avg by (instance) (irate(node_cpu{job="node-exporter",mode="idle"}[5m])) * 100)) > 90
        for: 30m
        labels:
          severity: warning
          kind: infra
        annotations:
          summary: "{{$labels.instance}}: High CPU usage detected"
          description: "{{$labels.instance}}: CPU usage is above 90% (current value is: {{ $value }})"
      - alert: NodeMemoryUsage
        expr: (((node_memory_MemTotal-node_memory_MemFree-node_memory_Cached)/(node_memory_MemTotal)*100)) > 90
        for: 30m
        labels:
          severity: warning
          kind: infra
        annotations:
          summary: "{{$labels.instance}}: High memory usage detected"
          description: "{{$labels.instance}}: Memory usage is above 90% (current value is: {{ $value }})"
  prometheus.rules: |
    groups:
    - name: prometheus.rules
      rules:
      - alert: FailedReload
        expr: prometheus_config_last_reload_successful == 0
        for: 10m
        labels:
          severity: warning
        annotations:
          description: Reloading Prometheus' configuration has failed for {{ $labels.namespace
            }}/{{ $labels.pod}}.
          summary: Prometheus configuration reload has failed
---
# Source: prometheus/templates/rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  labels:
    app: prometheus
    group: com.stakater.platform
    provider: stakater
    version: "2.2.0-rc.0"
    chart: "prometheus-1.0.32"
    release: "release-name"
    heritage: "Helm"
  name: monitoring-k8s
rules:
- apiGroups: [""]
  resources:
  - nodes/metrics
  verbs: ["get"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
- apiGroups: [""]
  resources:
  - nodes
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources:
  - configmaps
  verbs: ["get"]
---
# Source: prometheus/templates/rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  labels:
    app: prometheus
    group: com.stakater.platform
    provider: stakater
    version: "2.2.0-rc.0"
    chart: "prometheus-1.0.32"
    release: "release-name"
    heritage: "Helm"
  name: monitoring-k8s
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: monitoring-k8s
subjects:
- kind: ServiceAccount
  name: monitoring-k8s
  namespace: default
---
# Source: prometheus/templates/rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  labels:
    app: prometheus
    group: com.stakater.platform
    provider: stakater
    version: "2.2.0-rc.0"
    chart: "prometheus-1.0.32"
    release: "release-name"
    heritage: "Helm"
  name: monitoring-k8s
  namespace: default
rules:
- apiGroups: [""]
  resources:
  - nodes
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources:
  - configmaps
  verbs: ["get"]
---
# Source: prometheus/templates/rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  labels:
    app: prometheus
    group: com.stakater.platform
    provider: stakater
    version: "2.2.0-rc.0"
    chart: "prometheus-1.0.32"
    release: "release-name"
    heritage: "Helm"
  name: monitoring-kube-system-k8s
  namespace: kube-system
rules:
- apiGroups: [""]
  resources:
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
---
# Source: prometheus/templates/rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  labels:
    app: prometheus
    group: com.stakater.platform
    provider: stakater
    version: "2.2.0-rc.0"
    chart: "prometheus-1.0.32"
    release: "release-name"
    heritage: "Helm"
  name: monitoring-default-k8s
  namespace: default
rules:
- apiGroups: [""]
  resources:
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
---
# Source: prometheus/templates/rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  labels:
    app: prometheus
    group: com.stakater.platform
    provider: stakater
    version: "2.2.0-rc.0"
    chart: "prometheus-1.0.32"
    release: "release-name"
    heritage: "Helm"
  name: monitoring-k8s
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: monitoring-k8s
subjects:
- kind: ServiceAccount
  name: monitoring-k8s
  namespace: default
---
# Source: prometheus/templates/rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  labels:
    app: prometheus
    group: com.stakater.platform
    provider: stakater
    version: "2.2.0-rc.0"
    chart: "prometheus-1.0.32"
    release: "release-name"
    heritage: "Helm"
  name: monitoring-kube-system-k8s
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: monitoring-kube-system-k8s
subjects:
- kind: ServiceAccount
  name: monitoring-k8s
  namespace: default
---
# Source: prometheus/templates/rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  labels:
    app: prometheus
    group: com.stakater.platform
    provider: stakater
    version: "2.2.0-rc.0"
    chart: "prometheus-1.0.32"
    release: "release-name"
    heritage: "Helm"
  name: monitoring-default-k8s
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: monitoring-default-k8s
subjects:
- kind: ServiceAccount
  name: monitoring-k8s
  namespace: default
---
# Source: prometheus/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    prometheus: monitoring-k8s
    expose: "true"
    app: prometheus
    group: com.stakater.platform
    provider: stakater
    version: "2.2.0-rc.0"
    chart: "prometheus-1.0.32"
    release: "release-name"
    heritage: "Helm"
  name: prometheus-k8s
spec:
  ports:
  - name: web
    port: 9090
    protocol: TCP
    targetPort: web
  selector:
    prometheus: k8s
    app: prometheus
---
# Source: prometheus/templates/prometheus.yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  annotations:
    configmap.fabric8.io/update-on-change: monitoring-k8s-rules
  # The name is used as label on the Pod made by PO
  name: k8s
  labels:
  # This label is imposed on the Stateful Set made by PO
    prometheus: monitoring-k8s
    app: prometheus
    group: com.stakater.platform
    provider: stakater
    version: "2.2.0-rc.0"
    chart: "prometheus-1.0.32"
    release: "release-name"
    heritage: "Helm"
spec:
  replicas: 2
  version: v2.2.0-rc.0
  externalUrl: http://127.0.0.1:9090
  serviceAccountName: monitoring-k8s
  serviceMonitorSelector:
    matchExpressions:
    - {key: k8s-app, operator: Exists}
  ruleSelector:
    matchLabels:
      prometheus: monitoring-k8s
      app: prometheus
      group: com.stakater.platform
      provider: stakater
  retention: 168h
  storage:
    class: ssd
    selector:
    resources:
    volumeClaimTemplate:
      metadata:
        annotations:
          annotation1: monitoring
      spec:
        storageClassName: ssd
        resources:
          requests:
            storage: 40Gi
  alerting:
    alertmanagers:
    - namespace: monitoring
      name: alertmanager-main
      port: web

The entire procedure I followed was:

  1. helm pull prometheus --repo https://stakater.github.io/stakater-charts --untar
  2. helm template prometheus > render_prometheus.yaml
  3. checkov -f render_prometheus.yaml

While I was looking into this, I noticed that checkov does work with many charts rendered in the same manner; nginx-ingress, for example, scans absolutely fine. So it might be that some charts are not rendered properly or contain errors. On a side note, when I convert the above-mentioned chart (prometheus) to JSON using yq ea '.' -o=json render.yaml > test.json and run checkov on the result, it works! Note that the JSON file generated this way has syntax errors, yet checkov scans it anyway. Quite surprising. Maybe you have a better idea about this.

Hope this helps!

epodegrid avatar Apr 19 '22 02:04 epodegrid

@epodegrid @nimrodkor I think the problem might be related to the curly brackets ({{) in the generated manifests. For example, here:

        annotations:
          description: The configuration of the instances of the Alertmanager cluster
            `{{$labels.service}}` are out of sync.

There is an open issue that might be related to this problem: #2660. Since that issue only concerns YAML, the observation here that JSON works is an indication that both issues come down to the same underlying problem.

tom1299 avatar Apr 19 '22 08:04 tom1299

@tom1299 that is a good observation. It could be the YAML curly brackets, but I still don't understand why checkov works on a syntactically invalid JSON document. I converted the same YAML file into valid JSON using yq ea '[.]' -o=json render.yaml > test.json, and checkov does not work on it. But when the enclosing square brackets are missing (using the command from my previous comment), making the syntax invalid, checkov produces output with a good number of checks. I find this quite interesting yet confusing.
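
To make the difference concrete, here is a minimal Python sketch of the two output shapes (the document contents below are made up for the example): yq ea '.' emits one JSON object per YAML document, concatenated, which is invalid as a single JSON document, while yq ea '[.]' wraps them all in one top-level array, which is valid.

import json

concatenated = '{"kind": "ServiceAccount"}\n{"kind": "Role"}'  # yq ea '.'
wrapped = '[{"kind": "ServiceAccount"}, {"kind": "Role"}]'     # yq ea '[.]'

json.loads(wrapped)            # parses: a single top-level array
try:
    json.loads(concatenated)   # fails when read as one document
except json.JSONDecodeError as exc:
    print(exc)                 # e.g. "Extra data: line 2 column 1"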

epodegrid avatar Apr 19 '22 12:04 epodegrid

Thanks for the examples and insight @tom1299 & @epodegrid! You were right on target - check out this line: https://github.com/bridgecrewio/checkov/blob/d31197f6ee37e121f53d437f7dc8a30ea0da4701/checkov/kubernetes/parser/k8_yaml.py#L32

This was added in #2515. @metahertz, can you take a look and perhaps help figure out a better way to exclude raw Helm files from the k8s runner?
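
For readers following along, this is roughly what that guard does (a paraphrased sketch for illustration, not the actual checkov source):

from pathlib import Path

def parse_k8s_file(path: str):
    # Illustrative only: mirrors the effect of the linked guard.
    content = Path(path).read_text()
    if "{{" in content:
        # Looks like an unrendered Helm template: the file is silently
        # dropped, so no resources are scanned and checkov prints nothing.
        return None
    return content  # normal YAML parsing would continue from here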

nimrodkor avatar Apr 19 '22 14:04 nimrodkor

@nimrodkor @tom1299 @metahertz I can confirm that this line affects not only Helm templates but also k8s manifests rendering Hashicorp Vault injector snippets:

What is weird is that I have been able to analyze Kubernetes manifests containing injector lines that use '{{'. Working excerpt from such a manifest:

      template:
        metadata:
          annotations:
            seccomp.security.alpha.kubernetes.io/pod: "runtime/default"
            vault.hashicorp.com/agent-init-first: "true"
            vault.hashicorp.com/agent-pre-populate-only: "true"
            vault.hashicorp.com/agent-inject: "true"
            vault.hashicorp.com/auth-type: "gcp"
            vault.hashicorp.com/auth-config-type: "iam"
            vault.hashicorp.com/auth-config-service-account: "< OMITTED >"
            vault.hashicorp.com/role: "db"
            vault.hashicorp.com/agent-inject-secret-dump-config: "db/mysql/foo"
            vault.hashicorp.com/agent-inject-file-dump-config: "mysqldump.cnf"
            vault.hashicorp.com/agent-inject-template-dump-config: |
              {{- with secret "db/mysql/foo" -}}
              [mysqldump]
              host=< OMITTED >
              user={{ .Data.data.user }}
              password={{ .Data.data.pass}}
              {{- end }}
            vault.hashicorp.com/agent-inject-secret-bucket-config: "db/mysql/foo"
            vault.hashicorp.com/agent-inject-file-bucket-config: "bucket.cnf"
            vault.hashicorp.com/agent-inject-template-bucket-config: |
              {{- with secret "db/mysql/foo" -}}
              export BUCKET="{{ .Data.data.data_backups_bucket }}"
              {{- end }}

I've read https://github.com/bridgecrewio/checkov/issues/2660#issuecomment-1091101886, but in this case the file is parsed and it is not a Helm template.

I can't make a concrete suggestion, but I hope this helps somehow.

lpzdvd-packlink avatar May 20 '22 09:05 lpzdvd-packlink

@lpzdvd-packlink Interesting, thanks for the info. I think both Vault and Helm use the Golang template package and thus run into the same kinds of problems with its syntax. It is indeed weird that the example you have given is scanned correctly. Could you provide the whole manifest and the log, if possible?

tom1299 avatar May 24 '22 05:05 tom1299

Prometheus rules also use '{{'.

maver1ck avatar Jul 13 '22 06:07 maver1ck

Same issue here. I rendered my own chart with helm template as described at [1]. The resulting resource output also contains some unreplaced values; checkov's output is empty, and the return code is always zero.

To reproduce this behavior, add the string "{{" to a value in any Kubernetes resource YAML. If you remove the string, you get the expected scanning result, with a return value of 1 if an error is found.
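
A self-contained sketch of that reproduction (the manifest below is made up; any "{{" inside a value triggers the behavior):

import subprocess
import textwrap

manifest = textwrap.dedent("""\
    apiVersion: v1
    kind: Pod
    metadata:
      name: repro
      annotations:
        example: "{{ not a template }}"
    spec:
      containers:
      - name: app
        image: nginx
    """)
with open("bla.yaml", "w") as f:
    f.write(manifest)

# Exit code is 0 and no checks are reported while "{{" is present; remove
# it and real check results (with a non-zero exit on failures) come back.
result = subprocess.run(["checkov", "-f", "bla.yaml", "--framework", "kubernetes"])
print(result.returncode)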

Scan with "{{"

[ kubernetes framework ]: 100%|████████████████████|[1/1], Current File Scanned=bla.yaml

       _               _              
   ___| |__   ___  ___| | _______   __
  / __| '_ \ / _ \/ __| |/ / _ \ \ / /
 | (__| | | |  __/ (__|   < (_) \ V / 
  \___|_| |_|\___|\___|_|\_\___/ \_/  
                                      
By bridgecrew.io | version: 2.1.121 

marge: $ echo $?
0

Scan without "{{"

marge: $ checkov -f bla.yaml --framework kubernetes | grep Check | wc -l
97

It would be nice if the exit code reflected this case, and it would be perfect if checkov emitted an error message.

[1] https://www.checkov.io/7.Scan%20Examples/Helm.html

sagiru avatar Aug 12 '22 11:08 sagiru

I am experiencing the same issue as reported here; the YAML file has {{variables}} in it. As an interim workaround I am using the following command: cat non-working.yml | sed "s/{{/[[/g" > working.yml. It converts every {{ to [[, after which checkov is able to scan the file again.
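
A Python equivalent of that workaround, as a sketch (file names as above):

# Replace every "{{" so checkov no longer skips the file as a raw template.
with open("non-working.yml") as src, open("working.yml", "w") as dst:
    dst.write(src.read().replace("{{", "[["))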

Jan-Paul avatar Aug 18 '22 15:08 Jan-Paul

Thanks for contributing to Checkov! We've automatically marked this issue as stale to keep our issues list tidy, because it has not had any activity for 6 months. It will be closed in 14 days if no further activity occurs. Commenting on this issue will remove the stale tag. If you want to talk through the issue or help us understand the priority and context, feel free to add a comment or join us in the Checkov slack channel at https://slack.bridgecrew.io Thanks!

stale[bot] avatar Feb 18 '23 21:02 stale[bot]

@epodegrid Is it fixed?

maver1ck avatar Feb 19 '23 08:02 maver1ck

Thanks for contributing to Checkov! We've automatically marked this issue as stale to keep our issues list tidy, because it has not had any activity for 6 months. It will be closed in 14 days if no further activity occurs. Commenting on this issue will remove the stale tag. If you want to talk through the issue or help us understand the priority and context, feel free to add a comment or join us in the Checkov slack channel at https://slack.bridgecrew.io Thanks!

stale[bot] avatar Aug 18 '23 19:08 stale[bot]

Closing issue due to inactivity. If you feel this is in error, please re-open, or reach out to the community via slack: https://slack.bridgecrew.io Thanks!

stale[bot] avatar Sep 05 '23 00:09 stale[bot]