Upgrade from Kong 3.8.0 to 3.9.0: rate-limiting plugin deprecation warning
Is there an existing issue for this?
- [x] I have searched the existing issues
Kong version ($ kong version)
3.9.0
Current Behavior
When I upgraded from Kong 3.8.0 to 3.9.0, I got millions of warning messages in the logs about the rate-limiting plugin having deprecated config.redis_* settings.
Example: `1407#0: *50235 [kong] init.lua:904 rate-limiting: config.redis_ssl is deprecated`
Expected Behavior
When I upgrade, I expect config that has moved to a new location to be updated automatically.
Steps To Reproduce
- Kubernetes 1.31.x
- Running Kong 3.8.0 in DB-less mode deployed with Helm using chart version 2.46.0
- Have an ingress with the rate-limiting plugin applied to the route (see the sketch after this list)
- Upgrade the chart to 2.48.0, making sure the image is 3.9.0
- See the warnings
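A minimal sketch of the setup in the last two reproduction steps (resource names, namespace, and backend service are placeholders, not my actual manifests): a KongPlugin resource for rate-limiting attached to an Ingress route via the konghq.com/plugins annotation.

```yaml
# Placeholder sketch: rate-limiting KongPlugin attached to an Ingress route.
# Resource names and the backend service are assumptions for illustration only.
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rate-limit-example            # assumed name
plugin: rate-limiting
config:
  path: /
  second: 10
  limit_by: ip
  hide_client_headers: false
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress               # assumed name
  annotations:
    konghq.com/plugins: rate-limit-example   # attaches the plugin to this route
spec:
  ingressClassName: kong
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-service # assumed backend service
                port:
                  number: 80
```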
Anything else?
Kong is in DB-less mode. The warning comes from the kong_admin service. The plugins and ingress routes are deployed in additional Helm charts in namespaces different from Kong's.
This field will be automatically replaced with redis.ssl in version 3.x. If you are using DB-less mode, you can modify the Admin API request parameters when applying the configuration, changing redis_ssl to redis.ssl.
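For illustration, the change described above would look roughly like this in the plugin config (placeholder values; the nested keys mirror the redis block shown in the deployed config later in this thread):

```yaml
# Rough sketch of the suggested migration (placeholder values): move from the
# flat redis_* keys to the nested redis block in the rate-limiting config.
config:
  policy: redis
  redis:
    host: redis.example.svc   # placeholder; was redis_host
    port: 6379                # was redis_port
    ssl: true                 # was redis_ssl
    ssl_verify: false         # was redis_ssl_verify
    timeout: 2000             # was redis_timeout
```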
@Water-Melon I'm not using any Redis in my config, nor has it ever been configured, yet I still get the deprecation warning.
More info: here is the full deployed config of the rate-limiting plugin as it was deployed with Kong 3.8.0:
"config": {
"redis_host": null,
"policy": "local",
"fault_tolerant": true,
"hide_client_headers": false,
"minute": null,
"redis_timeout": 2000,
"redis_server_name": null,
"redis_ssl_verify": false,
"redis": {
"timeout": 2000,
"password": null,
"server_name": null,
"host": null,
"ssl": false,
"username": null,
"port": 6379,
"database": 0,
"ssl_verify": false
},
"error_code": 429,
"error_message": "API rate limit exceeded",
"sync_rate": -1,
"redis_ssl": false,
"redis_password": null,
"redis_port": 6379,
"redis_username": null,
"second": 10,
"header_name": null,
"hour": null,
"day": null,
"month": null,
"year": null,
"path": "/",
"limit_by": "ip",
"redis_database": 0
},
Here is the config I'm defining in Helm:
```yaml
config:
  path: /
  second: 10
  limit_by: ip
  hide_client_headers: false
```
My understanding is that the redis_* config is being deployed with default values. I understand this is done for compatibility before the 4.0 release, but the over-logging of deprecation warnings in the Kong proxy logs is too much.
This issue is marked as stale because it has been open for 14 days with no activity.
@Water-Melon any further comment on this?
I'm also encountering this issue. Like the OP, I am not using redis anywhere and my policy is "local" for the rate-limiting, so I don't understand why this error is appearing.
edit: I switched to kong/kong-gateway:3.10.0.2-rhel and these errors do not appear, so it seems specific to the Ubuntu container to me.
I've done some more testing. I still get the warning messages even with a fresh install of Kong 3.9.1 on a new Kubernetes cluster (so no older versions of Kong). The config I deploy with Helm for the rate-limiting plugin remained the same, specifying only what I mentioned before, so no Redis config is specified:
```yaml
config:
  path: /
  second: 10
  limit_by: ip
  hide_client_headers: false
```
I can see in the API console that it has configured the deprecated redis config by default.
Downgrading this fresh install of Kong to 3.8.1 removes the warnings.
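For reference, pinning the image back in the chart values would look roughly like this (key names assume the standard kong chart layout, not my exact values):

```yaml
# Sketch of pinning the Kong image back to 3.8.1 in the Helm values.
# Key names assume the standard kong/kong chart layout.
image:
  repository: kong
  tag: "3.8.1"
```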
This issue is marked as stale because it has been open for 14 days with no activity.
I'm receiving thousands of log lines saying I'm using the old config, but I'm using the new config! I've never used redis_host, only config.redis.host ... I think there's a bug in the deprecation check and it is actually complaining about the inverse.
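For illustration (placeholder values, not my actual config), the shape I'm describing, using only the nested redis block and never the flat redis_* keys, is roughly:

```yaml
# Placeholder sketch: a config that only uses the nested config.redis.* keys,
# yet still triggers the redis_* deprecation warnings described above.
config:
  policy: redis
  limit_by: ip
  second: 10
  redis:
    host: redis.example.svc   # set via config.redis.host, not redis_host
    port: 6379
```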
This issue is marked as stale because it has been open for 14 days with no activity.
Still an issue.
This issue is marked as stale because it has been open for 14 days with no activity.
Dear contributor,
We are automatically closing this issue because it has not seen any activity for three weeks. We're sorry that your issue could not be resolved. If any new information comes up that could help resolving it, please feel free to reopen it.
Your contribution is greatly appreciated!
Please have a look at our pledge to the community for more information.
Sincerely, Your Kong Gateway team
Still an issue.