Karl Stoney
I _think_ the key under meta is now simply `payload` rather than the issuer, as it was in 1.20.4? I can't see anything related to this in the release notes?
https://istio.io/latest/news/releases/1.21.x/announcing-1.21/change-notes/#security talks about some changes to the dynamic metadata key:

> Improved request JWT authentication to use the upstream Envoy JWT filter instead of the custom Istio Proxy filter. Because...
I can work around this by doing:

```lua
local meta = request_handle:streamInfo():dynamicMetadata():get("envoy.filters.http.jwt_authn")
local claimsOld = meta["https://app.platform-identity-provider.testing.k8.atcloud.io"]
local claimsNew = meta["payload"]
local claims = claimsNew or claimsOld
```

But would be...
@kyessenov I'm not sure I understand you. We have a perfectly working EnvoyFilter on 1.20 that reads the claims from `request_handle:streamInfo():dynamicMetadata():get("envoy.filters.http.jwt_authn")["https://app.platform-identity-provider.testing.k8.atcloud.io"]`. That key no longer works, but `request_handle:streamInfo():dynamicMetadata():get("envoy.filters.http.jwt_authn")["payload"]` does.
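For anyone else hitting this, here's a rough sketch of how the version-tolerant fallback could be wired into an EnvoyFilter. This is a minimal illustration, not our actual config: the filter name, namespace, and match context are assumptions, and only the Lua fallback itself comes from the workaround above.

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: jwt-claims-lua   # hypothetical name
spec:
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: SIDECAR_INBOUND
      listener:
        filterChain:
          filter:
            name: envoy.filters.network.http_connection_manager
            subFilter:
              name: envoy.filters.http.router
    patch:
      operation: INSERT_BEFORE
      value:
        name: envoy.lua
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua
          inlineCode: |
            function envoy_on_request(request_handle)
              local meta = request_handle:streamInfo():dynamicMetadata():get("envoy.filters.http.jwt_authn")
              -- Prefer the 1.21 "payload" key, fall back to the 1.20 issuer key
              local claims = meta["payload"] or meta["https://app.platform-identity-provider.testing.k8.atcloud.io"]
            end
```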
> Now the key is always "payload" since it simplifies the configuration.

Agree, it's simpler.

> but we usually reserve the right to do small changes like this

That's why...
Been doing a bit more testing.

Via proxy (SSL), IAM enabled:

```
1x connection: {"connect":181,"query":211,"durationMs":216}
50x connections: {"connect":793.9,"query":825.68,"durationMs":828}
```

Via proxy (SSL), username/password:

```
1x: {"connect":152,"query":183,"durationMs":188}
50x: {"connect":406.48,"query":436.56,"durationMs":437.86}
```

No...
Thought I'd try against pg14 local Docker instances:

```
1x: {"connect":30,"query":32,"durationMs":37}
50x: {"connect":88.18,"query":91.74,"durationMs":99.04}
100x: {"connect":115.23,"query":121.95,"durationMs":132.03}
```

So now I'm questioning my test script... going to try with a different pg...
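For context, the shape of the numbers comes from timing each connect + query cycle and averaging over N runs. Below is a simplified sketch of that measurement approach, not the actual test script: it runs sequentially rather than with concurrent connections, uses stand-in `connect`/`query` callables instead of a real pg driver, and folds teardown time into `durationMs` only implicitly.

```python
import statistics
import time


def timed_run(connect, query):
    """Time one connect + query cycle, returning millisecond timings.

    `connect` and `query` are stand-ins for the real driver calls
    (e.g. opening a pg connection and running SELECT 1).
    """
    start = time.perf_counter()
    conn = connect()
    connect_ms = (time.perf_counter() - start) * 1000
    query(conn)
    query_ms = (time.perf_counter() - start) * 1000
    # In this simplified sketch durationMs equals the query timestamp;
    # a real script would also include connection teardown here.
    return {"connect": connect_ms, "query": query_ms, "durationMs": query_ms}


def averaged(connect, query, runs):
    """Average the per-run timings over `runs` sequential cycles."""
    results = [timed_run(connect, query) for _ in range(runs)]
    return {
        key: round(statistics.mean(r[key] for r in results), 2)
        for key in ("connect", "query", "durationMs")
    }


if __name__ == "__main__":
    # With no-op stand-ins, timings are near zero; swap in real driver calls.
    print(averaged(lambda: None, lambda conn: None, 50))
```

One caveat with any harness like this: sequential loops measure per-connection latency, not behaviour under 50 genuinely concurrent connections, which is where pooling effects show up.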
Hey @enocom, thanks for the response. Pretty sure the latency we're observing now is _not_ the proxy anyway. Even testing against a local Postgres instance with no proxy, I observe...
Just sharing this here as I found it interesting: https://twitter.com/BdKozlovski/status/1684098236426878976?t=SWYsfn24ltvFSyEOKHjjEQ&s=19 Cloudflare use Postgres at scale and point out how expensive connections are, and how they use https://www.pgbouncer.org to mitigate that....
:wave: Have been giving this plugin a go today, so thought I'd share my UX feedback. Caveat: fully appreciate it's probably just a product of the way we use Redis....