quota-monitor-for-aws
NoSuchResourceException
Hello,
We have implemented the aws-limit-monitor and its spokes in a lot of our accounts. But the Lambda LimitMonitorFunction has some problems. It is throwing NoSuchResourceException and keeps running for a looooong time. I have set the logging level to DEBUG and these logs are repeated in CloudWatch over and over.
| 2020-08-10T16:17:56.785+02:00 | START RequestId: 306fe37b-5bf2-4915-8402-dd0f07c03234 Version: $LATEST
| 2020-08-10T16:17:56.789+02:00 | 2020-08-10T14:17:56.789Z 306fe37b-5bf2-4915-8402-dd0f07c03234 INFO [DEBUG]Received event: { "version": "0", "id": "61c3ab5c-ce1e-a92b-ca1c-b3071460314e", "detail-type": "Scheduled Event", "source": "aws.events", "account": "451413662958", "time": "2020-08-10T14:17:31Z", "region": "us-east-1", "resources": [ "arn:aws:events:us-east-1:451413662958:rule/aws-limit-monitor-spoke-limitCh-LimitCheckSchedule-1393J5ZZR9EQZ" ], "detail": {} }
| 2020-08-10T16:17:59.639+02:00 | 2020-08-10T14:17:59.639Z 306fe37b-5bf2-4915-8402-dd0f07c03234 INFO [ERROR]NoSuchResourceException: The request failed because the specified service does not exist.
| 2020-08-10T16:18:45.604+02:00 | 2020-08-10T14:18:45.603Z 306fe37b-5bf2-4915-8402-dd0f07c03234 INFO [DEBUG]Running On-Demand Standard (A, C, D, H, I, M, R, T, Z) instances
| 2020-08-10T16:18:53.823+02:00 | END RequestId: 306fe37b-5bf2-4915-8402-dd0f07c03234
| 2020-08-10T16:18:53.823+02:00 | REPORT RequestId: 306fe37b-5bf2-4915-8402-dd0f07c03234 Duration: 57034.52 ms Billed Duration: 57100 ms Memory Size: 128 MB Max Memory Used: 110 MB
We are getting the notifications in Slack, but having the Lambda run for 57 seconds multiple times per hour adds unnecessary cost.
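For scale, a rough back-of-envelope sketch of that cost; the per-GB-second Lambda price and the invocation rate below are assumptions for illustration, not figures from this issue:

```python
# Rough monthly cost estimate for one spoke account.
# Assumptions (not from this issue): ~$0.0000166667 per GB-second for x86 Lambda,
# $0.20 per million requests, and roughly 12 invocations per hour.
billed_seconds = 57.1            # billed duration from the REPORT line above
memory_gb = 128 / 1024           # 128 MB memory size
price_per_gb_second = 0.0000166667
invocations_per_month = 12 * 24 * 30

compute_cost = billed_seconds * memory_gb * price_per_gb_second * invocations_per_month
request_cost = invocations_per_month / 1_000_000 * 0.20
print(f"~${compute_cost + request_cost:.2f} per month per account")  # roughly $1
```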
Sorry if this is a user error on my side, but I believe I followed the setup guide correctly.
It's not just you. I'm trying this template for the first time and I'm seeing the same error.
The error message in the logs does not affect how the solution works. It appears because Service Quotas is not supported in all regions; we will clean up the logging in the next release to avoid confusion. If you want to reduce how often the Lambda function runs, you can update the corresponding CloudWatch Events rule to change the execution frequency.
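For reference, a minimal sketch of what that rule update could look like with boto3; the rule name is copied from the log excerpt above and the rate expression is only an example. If the rule is managed by the spoke CloudFormation stack, updating the stack's refresh-rate parameter instead avoids stack drift.

```python
# Sketch: lower the frequency of the scheduled rule that triggers LimitMonitorFunction.
# The rule name below comes from the log excerpt in this issue; adjust it for your account.
import boto3

events = boto3.client("events", region_name="us-east-1")

events.put_rule(
    Name="aws-limit-monitor-spoke-limitCh-LimitCheckSchedule-1393J5ZZR9EQZ",
    # e.g. run once a day instead of multiple times per hour
    ScheduleExpression="rate(1 day)",
    State="ENABLED",
)
```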
closing due to inactivity