Ocelot
Caching based on response status
Hi, an endpoint with caching enabled caches the output even if the downstream request returns an HTTP status code other than 200. Is this by design, or is it a bug?
@nurhat is this happening with 4xx and 5xx responses? Success isn't only 200.
If so, it sounds like a bug to me -- @TomPallister is there something I'm missing as to why non-success would still be cached?
@philproctor yes, if you check OutputCacheMiddleware.cs line 55, it does not check the HTTP status code. Caching is only disabled when the HTTP request fails with an exception.
Got it -- I'll keep this open as a bug.
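For illustration, the gap described above amounts to a missing status check before the response is stored. Below is a minimal, hypothetical C# guard, not Ocelot's actual middleware code; ShouldCache is an illustrative helper name.

using System.Net.Http;

// Hypothetical guard illustrating the missing check: only store a downstream
// response in the output cache when it reports a success (2xx) status.
// ShouldCache is an illustrative name, not an Ocelot API.
public static class CacheGuard
{
    public static bool ShouldCache(HttpResponseMessage downstreamResponse)
    {
        // IsSuccessStatusCode is true only for 2xx, so 4xx and 5xx
        // responses would be skipped by this guard.
        return downstreamResponse.IsSuccessStatusCode;
    }
}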
Update by Maintainer on Feb 12, 2025
This is not a bug! This is a feature proposal.
I think this was intentional and I wouldn't call it a bug. Why, for example, wouldn't you cache a 404 response if the API behind it is telling you that the resource doesn't exist?
Update by Maintainer on Feb 12, 2025
Marcelo, you are absolutely right: this is not a bug. This is new functionality people want to have in Ocelot: this issue is a feature request.
404 is a response type that might make sense to cache, but there are also a lot of other 4xx and 5xx responses that don't make sense to cache. Pretty much anything in the 5xx range shouldn't be cached, and some 4xx responses may depend on the current state of the services or similar.
Perhaps if the response is not .IsSuccessStatusCode(), we should check against a configured whitelist of cacheable responses.
Yep, it was just an example. You mean configured by the user?
That's my thought, as part of the cache config.
Mmmm, I don't like that. What if someone wants to disable one code for one downstream but not for another?
I was thinking about the ReRoute cache config, not the global. Maybe something like:
"FileCacheOptions": {
"TtlSeconds": 0,
"Region": "",
"StatusCodes": [400, 404, 405, 410]
},
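As a sketch of how such a per-route whitelist could be consulted, assuming a hypothetical StatusCodes option like the one above (not an existing Ocelot setting), the decision could look roughly like this:

using System.Collections.Generic;
using System.Net.Http;

// Hypothetical cache policy: 2xx responses are always cacheable; non-success
// responses are cached only if the route explicitly whitelists their status code.
// CacheStatusWhitelist is an illustrative type, not part of Ocelot's configuration model.
public class CacheStatusWhitelist
{
    public List<int> StatusCodes { get; set; } = new(); // e.g. 400, 404, 405, 410
}

public static class CachePolicy
{
    public static bool ShouldCache(HttpResponseMessage response, CacheStatusWhitelist whitelist)
    {
        return response.IsSuccessStatusCode
            || whitelist.StatusCodes.Contains((int)response.StatusCode);
    }
}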
Ohhhhhh that's better, but I think it's going to be kinda painful to declare all the routes explicitly. Maybe some global whitelist configuration with exceptions per route would be easier
I agree on the global + per route
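To make the idea concrete, a hypothetical shape for a global whitelist with per-route overrides might look like the following. Both the GlobalConfiguration placement of FileCacheOptions and the StatusCodes field are assumptions, not existing Ocelot options, and the route entry omits the usual downstream host and scheme fields for brevity.

{
  "GlobalConfiguration": {
    "FileCacheOptions": {
      "TtlSeconds": 60,
      "StatusCodes": [404]
    }
  },
  "ReRoutes": [
    {
      "UpstreamPathTemplate": "/products/{id}",
      "DownstreamPathTemplate": "/api/products/{id}",
      "FileCacheOptions": {
        "TtlSeconds": 30,
        "StatusCodes": [400, 404, 410]
      }
    }
  ]
}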
@philproctor, any progress on the issue? Thx
We're still having this problem with 500 responses being cached. Any chance a config was added and this issue just got buried?
Hi Team
Any plan to fix this issue, as we are also facing this issue where 500 or even 401 responses are being cached!!!
Thanks, Suraj
@sberwal on Feb 11
Suraj, multiple exclamation marks are unnecessary. Contributors and you are welcome to open a pull request. If it meets the required quality standards of the dev process, I will include it in the current release. Are you willing to contribute?
P.S. If you are experiencing issues, I recommend switching off the cache entirely. This is an elegant solution, for sure.
P.P.S. As a workaround, you could decrease TtlSeconds to 1-5 seconds to allow for the auto-removal of "bad" responses and temporarily apply a Retry pattern on the client's side with a predicate based on "bad" statuses. Please note, the sum of all retry periods must be greater than TtlSeconds.
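For what it's worth, a rough sketch of that client-side retry workaround with plain HttpClient follows; the attempt count and delay are illustrative values, chosen so the total retry window (4 × 2 s = 8 s) outlives a route TtlSeconds of 1-5 seconds.

using System;
using System.Net.Http;
using System.Threading.Tasks;

// Illustrative client-side workaround: re-request on "bad" statuses until the
// short cache TTL on the gateway has expired and a fresh downstream response
// can be cached. The attempt count and delay are example values only.
public static class RetryWorkaround
{
    private static readonly HttpClient Client = new();

    public static async Task<HttpResponseMessage> GetWithRetryAsync(string url)
    {
        var delay = TimeSpan.FromSeconds(2); // 4 retries x 2 s = 8 s, longer than a 1-5 s TtlSeconds
        var response = await Client.GetAsync(url);

        for (var attempt = 0; attempt < 4 && !response.IsSuccessStatusCode; attempt++)
        {
            response.Dispose();
            await Task.Delay(delay);
            response = await Client.GetAsync(url);
        }

        return response;
    }
}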
@margaale self-assigned this on Jan 10, 2019
Dear Marcelo, you assigned this issue to yourself six years ago. I wonder whether you have a ready solution. How did you solve the problem?
Hi, sorry but no. Actually, I'm not even a part of the team anymore...
@margaale on Feb 11
Understood! I unassigned you. Best of luck with your new team.
P.S. You may want to share our repository with your current team lead.
Raman,
This issue is actually impacting the basic cache functionality, and the workaround of reducing TTL does not help in any way. It's not even a workaround, because the cache requirement is generally for longer periods than a few seconds, to ensure that backend systems are not hit for every request and we can send a quick response.
If we are going to keep the cache for 1-5 seconds, then we may as well not have it at all, which defeats the purpose of having cache functionality.
And the number of exclamation marks might be offensive to you, but bugs like these not being addressed are hurting the community more, especially when we are in production and struggling to get basic features to work properly.
I will let you and other team members decide whether you should consider raising an internal PR on this or wait for the community to come back to a bug identified six years back.
Thanks, Suraj
@sberwal Bla-bla-bla!
I will let you and other team members decide whether you should consider raising an internal PR on this or wait for the community to come back to a bug identified six years back.
You have been provided with explanations. There is no bug in Ocelot. Any downstream service behavior is outside the responsibility of the Ocelot team. This is how your consumed service behaves. This is not a bug; this is a feature proposal!
You and other team members should provide a stabilization solution for the consumed downstream service. Moreover, I proposed a solution, but you are only providing unproductive responses. Please ask your team to work on this matter. The Ocelot team will not work on behalf of your developers; your team must develop a solution.
The Ocelot cache saves any response with any status because this is the behavior of the downstream service. This is not a bug! Phil put the wrong label on this issue.
This feature request will be developed based on the Ocelot team's capacity, or we will accept a community contribution.
And the number of exclamation marks might be offensive to you, but bugs like these not being addressed are hurting the community
It is not your business to speak on behalf of other people!
...more, especially when we are in production and struggling to get basic features to work properly.
You and your team are unprofessional. I wonder how your team delivered something into production.