serverless-api-gateway-caching
Enable/disable option not working for shared API GW
When using a shared API GW, the global configuration apiGatewayCaching.enabled and the endpoint configuration caching.enabled don't work as expected. As long as the cache is enabled for the shared API GW, it is enabled for all individual endpoints in it.
Are those two options designed only for dedicated API GWs? Thanks
Hi @GaaraZhu,
Thanks for your question.
You can have endpoints split across multiple serverless projects, but all pointing to the same API Gateway. This documentation shows how you can reference an already existing API Gateway within a serverless project.
When using this plugin, it is expected that you would apply it to the project where your API Gateway configuration lives, so that it can enable/disable caching across the stage.
Any endpoints that are deployed via other projects that reference this API Gateway would, by default, inherit their caching settings from the configuration in the main API Gateway project, even when they don't themselves use the caching plugin.
When you add the caching plugin to these endpoint projects, you would need to specify apiGatewayIsShared: true so that it doesn't apply this project's specific settings to the main API Gateway.
Here's an example of how you might use it:
Main API Gateway serverless.yml:
```yaml
plugins:
  - serverless-api-gateway-caching

custom:
  # all endpoints deployed to this gateway inherit these settings by default
  apiGatewayCaching:
    enabled: false # all endpoints by default have caching disabled
```
Another project's serverless.yml which only deploys endpoints that reference the main API Gateway:
```yaml
plugins:
  - serverless-api-gateway-caching

custom:
  apiGatewayCaching:
    # this enables caching for endpoints in this project (each endpoint must also set caching.enabled to true)
    enabled: true
    # this makes sure the settings on the main API Gateway are not changed; only the endpoints are updated
    apiGatewayIsShared: true

functions:
  # caching disabled; it must be explicitly enabled per endpoint
  createUser:
    handler: createUser.handle
    events:
      - http:
          path: /users
          method: post

  getUsers:
    handler: getUsers.handle
    events:
      - http:
          path: /users
          method: get
          caching:
            enabled: true # enables caching for this endpoint
```
I hope this makes sense. If your use case is different to the above, please provide some more detail about your configuration.
Thanks @DianaIonita for such a detailed reply. This is very helpful. Previously we only applied the plugin in the endpoint config, not the shared API GW. I'll have another try and let you know whether it works or not. Thanks
@DianaIonita We just tried the above solution. What we found is that after we applied the configuration below to the main API GW, the cache was disabled for the whole API GW (Stage -> Settings -> Cache Settings). After we enabled it in our service for the endpoint, we checked both the request duration in Postman and the metrics in CloudWatch; neither shows that there is a cache for the endpoint.
Our main API GW configuration:
```yaml
plugins:
  - serverless-api-gateway-caching

custom:
  apiGatewayCaching:
    restApiId: ${cf:core-layer-${self:provider.stage}.apiGatewayRestApiId}
    enabled: false # all endpoints by default have caching disabled
```
Our service endpoint configuration:
```yaml
- http:
    private: true
    path: ${self:custom.api.urls.accountsGet}
    method: get
    cors: true
    request:
      parameters:
        paths:
          customerId: true
    caching:
      enabled: true
      cacheKeyParameters:
        - name: request.path.customerId
...
plugins:
  - serverless-api-gateway-caching

custom:
  apiGatewayCaching:
    enabled: true
    apiGatewayIsShared: true
```
Hi @GaaraZhu,
I reviewed the setup that I'm currently using and the main API Gateway settings are as follows:
```yaml
custom:
  apiGatewayCaching:
    enabled: true # create an API cache
    ttlInSeconds: 0 # but don't cache responses by default (individual endpoints can override this)
```
This isn't particularly easy to figure out; I also made the mistake of trying to do it without an example. I'll add this to the plugin README.
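An individual endpoint deployed from another project can then override that default with its own TTL. Here's a minimal sketch; the function name, handler and path are placeholders rather than anything from the configuration above:

```yaml
plugins:
  - serverless-api-gateway-caching

custom:
  apiGatewayCaching:
    enabled: true
    apiGatewayIsShared: true # don't touch the main API Gateway's stage settings

functions:
  getUsers: # placeholder function name
    handler: getUsers.handle
    events:
      - http:
          path: /users
          method: get
          caching:
            enabled: true
            ttlInSeconds: 300 # cache this endpoint's responses for 5 minutes, overriding the stage default of 0
```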
Thanks @DianaIonita, that's a very tricky workaround. I think the whole reason we can only do it this way is that AWS API GW doesn't support enabling caching for only a single resource; it has to be enabled for the whole gateway. Here comes a question: by setting ttlInSeconds to 0, will API GW still perform the cache write and delete for all resources? Normally the write and delete operations for such a cache happen asynchronously, and the cache writer usually doesn't care about the value of the TTL.
Here comes a question: by setting ttlInSeconds to 0, will API GW still perform the cache write and delete for all resources?
Hi @GaaraZhu,
That's a really good question. My guess is yes, an entry would be written to the cache, but I don't think it would live for long. I'd say there's probably something looking at the cache periodically and removing expired entries. Or at least I'd hope so, otherwise the cache cluster would very quickly run out of space. I'm not sure whether there's a way to actually test this though.
Right @DianaIonita, it also means unnecessary overhead in our case since our purpose is just to turn the cache off. Maybe it's worthwhile to confirm with AWS Support whether this workaround is OK? Other than that, this plugin works perfectly in cases where we have the cache enabled for the whole GW.
it also means unnecessary overhead in our case since our purpose is just to turn the cache off
True, though it's uncertain that it would impact the endpoint. It's a good point though, I'll try asking AWS support what their thoughts are on the matter. Thanks for raising this!
Hi @DianaIonita,
Thanks for the plugin. Is there any update on the following?
Right @DianaIonita, it also means unnecessary overhead in our case since our purpose is just to turn the cache off. Maybe it's worthwhile to confirm with AWS Support whether this workaround is OK? Other than that, this plugin works perfectly in cases where we have the cache enabled for the whole GW.
Hi @deepaktammali,
Yes, AWS support replied and said "having the TTL set to 0 means no caching will be done, thus no cache writing or deleting for all resources." They also suggested having different stages, one with caching enabled by default and one without, with endpoints deployed to each based on that requirement. Of course, it's up to you to decide whether that works for your use case.
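If it helps, here's a rough sketch of how a single project could follow that suggestion by keying the cache setting off the stage; the stage names and the cachingEnabled map are illustrative, not part of the plugin:

```yaml
custom:
  # illustrative map: which stages should have an API cache created
  cachingEnabled:
    prod: true
    dev: false
  apiGatewayCaching:
    # resolves to true or false depending on the stage being deployed
    enabled: ${self:custom.cachingEnabled.${self:provider.stage}}
```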
I'll close this issue as resolved, but feel free to continue the conversation if you wish.