amazon-ecs-exec-checker
jq: error (at <stdin>:173): Cannot iterate over null (null)
When running the checker, I get the following jq error:
```
-------------------------------------------------------------
Prerequisites for check-ecs-exec.sh v0.7
-------------------------------------------------------------
jq                     | OK (/opt/homebrew/bin/jq)
AWS CLI                | OK (/opt/homebrew/bin/aws)
-------------------------------------------------------------
Prerequisites for the AWS CLI to use ECS Exec
-------------------------------------------------------------
AWS CLI Version        | OK (aws-cli/2.15.17 Python/3.11.7 Darwin/23.0.0 source/arm64 prompt/off)
Session Manager Plugin | OK (1.2.553.0)
-------------------------------------------------------------
Checks on ECS task and other resources
-------------------------------------------------------------
Region : eu-central-1
Cluster: REDACTED
Task   : REDACTED
-------------------------------------------------------------
Cluster Configuration  | Audit Logging Not Configured
Can I ExecuteCommand?  | arn:aws:iam::xxxxxxxxxxxxx:user/[email protected]
     ecs:ExecuteCommand: allowed
     ssm:StartSession denied?: allowed
Task Status            | RUNNING
Launch Type            | Fargate
Platform Version       | 1.4.0
Exec Enabled for Task  | OK
Container-Level Checks |
  ----------
    Managed Agent Status
  ----------
jq: error (at <stdin>:173): Cannot iterate over null (null)
```
I found out that not all containers have a `managedAgents` property. I was able to fix it by changing line 422 to

```shell
agentsStatus=$(echo "${describedTaskJson}" | jq -r ".tasks[0].containers[] | (.managedAgents // [])[].lastStatus // \"FallbackValue\"")
```
This is of course only a quick fix. The underlying issue is that we have AWS GuardDuty enabled: GuardDuty injects a sidecar container into each task, and those GuardDuty containers do not have a `managedAgents` property.
This is how the container comes back after describing it:
```json
{
  "containerArn": "arn:aws:ecs:eu-central-1:xxxxxxxxx:container/xxx-cluster-xxx/xxxxx/9efcbebb-1204-4212-84fa-1471bcadbf8c",
  "taskArn": "arn:aws:ecs:eu-central-1:xxxxxxxxx:task/xxxx-cluster-xxx/xxx",
  "name": "aws-guardduty-agent-GAhgQ",
  "imageDigest": "sha256:9f8cd438fb66f62d09bfc641286439f7ed5177988a314a6021ef4ff880642e68",
  "runtimeId": "c9103216b805432497d68c0190237d44-4043820195",
  "lastStatus": "RUNNING",
  "networkBindings": [],
  "networkInterfaces": [
    {
      "attachmentId": "c88ab07b-c263-419c-ba64-adea5c51eb07",
      "privateIpv4Address": "10.10.4.210"
    }
  ],
  "healthStatus": "UNKNOWN"
},
```
An alternative fix for line 422 is to explicitly filter out containers that lack a `managedAgents` property using `select`:

```shell
agentsStatus=$(echo "${describedTaskJson}" | jq -r ".tasks[0].containers[] | select(.managedAgents != null) | .managedAgents[].lastStatus")
```
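The `select`-based fix can be sanity-checked without touching AWS. The payload below is a minimal fabricated stand-in for a `describe-tasks` response, with one application container that has `managedAgents` and one GuardDuty-style sidecar that does not; only `jq` is required:

```shell
# Fabricated sample payload mimicking a describe-tasks response:
# one app container with managedAgents, one GuardDuty sidecar without.
describedTaskJson='{"tasks":[{"containers":[
  {"name":"app","managedAgents":[{"lastStatus":"RUNNING"}]},
  {"name":"aws-guardduty-agent-GAhgQ"}
]}]}'

# The unpatched expression from the script fails here, because
# .managedAgents is null for the sidecar and jq cannot iterate over null:
#   jq -r '.tasks[0].containers[].managedAgents[].lastStatus'

# The patched expression skips containers without managedAgents:
echo "${describedTaskJson}" | jq -r '.tasks[0].containers[] | select(.managedAgents != null) | .managedAgents[].lastStatus'
# prints: RUNNING
```

The `// []` fallback variant behaves the same way on this payload: `(.managedAgents // [])[]` simply produces no output for the sidecar instead of erroring out.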