
Caching based on response status

Open · nurhat opened this issue 6 years ago • 20 comments

Hi, an endpoint with caching enabled caches the output even if the downstream request returns a status code other than HTTP 200. Is this by design, or a bug?

nurhat avatar Jan 09 '19 14:01 nurhat

@nurhat Is this happening with 4xx and 5xx responses? Success isn't only 200.

If so, it sounds like a bug to me -- @TomPallister is there something I'm missing as to why non-success would still be cached?

philproctor avatar Jan 09 '19 15:01 philproctor

@philproctor Yes. If you check OutputCacheMiddleware.cs line 55, it does not check the HTTP status code. Caching is only disabled when the HTTP request fails with an exception.

nurhat avatar Jan 09 '19 15:01 nurhat
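For context, a minimal sketch of the behaviour being described. This is an illustration only, not the actual OutputCacheMiddleware source: the response is stored for any status code, and caching is skipped only when the downstream call throws.

    using System.Collections.Generic;
    using System.Net;

    // Illustration only, not the actual Ocelot OutputCacheMiddleware code:
    // the response is written to the cache for any status code, and caching is
    // skipped only when the downstream call throws.
    public static class OutputCacheBehaviourSketch
    {
        private static readonly Dictionary<string, (HttpStatusCode Status, string Body)> Cache =
            new Dictionary<string, (HttpStatusCode, string)>();

        public static void StoreResponse(string cacheKey, HttpStatusCode status, string body, bool downstreamThrew)
        {
            if (downstreamThrew)
            {
                return;                       // only an exception prevents caching
            }

            Cache[cacheKey] = (status, body); // 200, 404 and 500 are all cached alike
        }
    }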

Got it -- I'll keep this open as a bug.

Update by Maintainer on Feb 12, 2025

This is not a bug! This is a feature proposal.

philproctor avatar Jan 09 '19 16:01 philproctor

I think this was intentional, and I wouldn't call it a bug. Why, for example, wouldn't you cache a 404 response if the API behind is telling you that the resource doesn't exist?

Update by Maintainer on Feb 12, 2025

Marcelo, you are absolutely right: this is not a bug. This is new functionality people want in Ocelot: this issue is a feature request.

margaale avatar Jan 09 '19 17:01 margaale

404 is a response type that might make sense to cache, but there are also a lot of other 4xx and 5xx responses that don't make sense to cache. Pretty much anything in the 5xx range shouldn't be cached, and some 4xx responses may depend on the current state of the services or similar.

Perhaps if the response is not IsSuccessStatusCode, we should check it against a configured whitelist of cacheable status codes.

philproctor avatar Jan 09 '19 19:01 philproctor
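As an illustration of the whitelist idea above, the cacheability decision could look roughly like this. The type and member names here are hypothetical and not part of Ocelot's API.

    using System.Collections.Generic;
    using System.Net;

    // Hypothetical sketch of the proposed check: success responses are always
    // cacheable, while non-success responses are cacheable only if their status
    // code appears in a user-configured whitelist.
    public static class CacheabilityCheck
    {
        public static bool ShouldCache(HttpStatusCode statusCode, ISet<int> cacheableErrorCodes)
        {
            var code = (int)statusCode;
            var isSuccess = code >= 200 && code <= 299;
            return isSuccess || cacheableErrorCodes.Contains(code);
        }
    }

With a whitelist of { 404, 410 }, for example, a 404 would still be cached while a 500 would not.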

Yep, it was just an example. You mean configured by the user?

margaale avatar Jan 09 '19 19:01 margaale

That's my thought, as part of the cache config.

philproctor avatar Jan 09 '19 19:01 philproctor

Hmm, I don't like that. What if someone wants to disable a code for one downstream but not for another?

margaale avatar Jan 09 '19 19:01 margaale

I was thinking about the ReRoute cache config, not the global. Maybe something like:

          "FileCacheOptions": {
              "TtlSeconds": 0,
              "Region": "",
              "StatusCodes": [400, 404, 405, 410]
          },

philproctor avatar Jan 09 '19 21:01 philproctor

Ohhhhhh, that's better, but I think it's going to be kind of painful to declare this for every route explicitly. Maybe a global whitelist configuration with per-route exceptions would be easier.

margaale avatar Jan 09 '19 22:01 margaale

I agree on the global + per-route approach.

philproctor avatar Jan 11 '19 17:01 philproctor
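A hypothetical shape for that combination is sketched below. Neither the global CacheOptions block nor the per-route StatusCodes override existed in Ocelot at the time, so this only illustrates the idea.

        "GlobalConfiguration": {
            "CacheOptions": {
                "StatusCodes": [ 200, 404 ]
            }
        },
        "ReRoutes": [
            {
                "FileCacheOptions": {
                    "TtlSeconds": 30,
                    "Region": "products",
                    "StatusCodes": [ 200 ]
                }
            }
        ]

A route-level StatusCodes entry would override the global whitelist, so most routes could rely on the global setting and only the exceptions would need to be declared explicitly.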

@philproctor, any progress on this issue? Thanks.

nurhat avatar Apr 01 '19 08:04 nurhat

We're still having this problem with 500 responses being cached. Any chance a config was added and this issue just got buried?

apoteet avatar Sep 01 '21 19:09 apoteet

Hi Team

Any plan to fix this issue? We are also facing it: 500 or even 401 responses are being cached!!!

Thanks, Suraj

sberwal avatar Feb 11 '25 09:02 sberwal

@sberwal on Feb 11

Suraj, multiple exclamation marks are unnecessary. You and other contributors are welcome to open a pull request. If it meets the required quality standards of the dev process, I will include it in the current release. Are you willing to contribute?

P.S. If you are experiencing issues, I recommend switching off the cache entirely. This is an elegant solution, for sure 😉

P.P.S. As a workaround, you could decrease TtlSeconds to 1-5 seconds to allow for the auto-removal of "bad" responses, and temporarily apply a Retry pattern on the client's side with a predicate based on "bad" statuses. Please note that the sum of all retry periods must be greater than TtlSeconds.

raman-m avatar Feb 11 '25 13:02 raman-m
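To make the P.P.S. concrete, here is a minimal sketch of such a client-side retry, assuming the route's TtlSeconds is set to about 2 seconds so that a cached "bad" response expires before the retries give up. The URL, delays, and type names are placeholders.

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    // Sketch of the suggested workaround: retry on non-success responses with
    // delays whose sum exceeds the route's TtlSeconds, so the cached "bad" entry
    // has expired by the time the last retry is sent.
    public static class RetryClientSketch
    {
        private static readonly HttpClient Http = new HttpClient();

        public static async Task<HttpResponseMessage> GetWithRetryAsync(string url)
        {
            var delays = new[] { TimeSpan.FromSeconds(1), TimeSpan.FromSeconds(2) }; // 3s total > 2s TTL

            var response = await Http.GetAsync(url);
            foreach (var delay in delays)
            {
                if (response.IsSuccessStatusCode)
                {
                    break;                   // good response: stop retrying
                }

                await Task.Delay(delay);     // wait for the cached bad entry to expire
                response = await Http.GetAsync(url);
            }

            return response;
        }
    }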

@margaale self-assigned this on Jan 10, 2019

Dear Marcelo, you assigned this issue to yourself six years ago. I wonder whether you have a ready solution. How did you solve the problem?

raman-m avatar Feb 11 '25 13:02 raman-m

Hi, sorry but no. Actually, I'm not even a part of the team anymore...

margaale avatar Feb 11 '25 13:02 margaale

@margaale on Feb 11

Understood! I unassigned you. Best of luck with your new team.

P.S. You may want to share our repository with your current team lead.

raman-m avatar Feb 11 '25 13:02 raman-m

Raman

This issue is actually impacting basic cache functionality, and the workaround of reducing the TTL does not help in any way. It's not even a workaround, because caching is generally needed for longer periods than a few seconds, to ensure that backend systems are not hit for every request and we can send a quick response.

If we are going to keep the cache for 1-5 seconds, then we may as well not have it at all, which defeats the purpose of having cache functionality.

And the number of exclamation marks might be offending to you, but bugs like these not being addressed is hurting the community, especially when we are in production and struggling to get basic features working properly.

I will let you and the other team members decide whether you should consider raising an internal PR for this or wait for the community to come back on a bug identified six years ago.

Thanks, Suraj


sberwal avatar Feb 12 '25 05:02 sberwal

@sberwal Bla-bla-bla!

> I will let you and the other team members decide whether you should consider raising an internal PR for this or wait for the community to come back on a bug identified six years ago.

You have been provided with explanations. There is no bug in Ocelot. Any downstream service behavior is outside the responsibility of the Ocelot team. This is how your consumed service behaves. This is not a bug; this is a feature proposal!

You and other team members should provide a stabilization solution for the consumed downstream service. Moreover, I proposed a solution, but you are only providing unproductive responses. Please ask your team to work on this matter. The Ocelot team will not work on behalf of your developers; your team must develop a solution.

The Ocelot cache saves any response with any status because that is the behavior of the downstream service. This is not a bug! Phil put the wrong label on this issue.

This feature request will be developed based on the Ocelot team's capacity, or we will accept a community contribution.

> And the number of exclamation marks might be offending to you, but bugs like these not being addressed is hurting the community

It is not your business to speak on behalf of other people!

> ...especially when we are in production and struggling to get basic features working properly.

You and your team are unprofessional. I wonder how your team delivered anything into production.

raman-m avatar Feb 12 '25 09:02 raman-m