
Security Alert: 1.16.25

Status: Open. soloio-bot opened this issue 5 months ago · 3 comments

quay.io/solo-io/access-logger:1.16.25

No Vulnerabilities Found for quay.io/solo-io/access-logger:1.16.25 (alpine 3.21.3)

Vulnerabilities Listed for usr/local/bin/access-logger

| Vulnerability ID | Package | Severity | Installed Version | Fixed Version | Reference |
|------------------|---------|----------|-------------------|---------------|-----------|
| CVE-2025-22874 | stdlib | HIGH | v1.24.1 | 1.23.10, 1.24.4 | https://avd.aquasec.com/nvd/cve-2025-22874 |

quay.io/solo-io/certgen:1.16.25

No Vulnerabilities Found for quay.io/solo-io/certgen:1.16.25 (alpine 3.21.3)

Vulnerabilities Listed for usr/local/bin/certgen

| Vulnerability ID | Package | Severity | Installed Version | Fixed Version | Reference |
|------------------|---------|----------|-------------------|---------------|-----------|
| CVE-2025-22874 | stdlib | HIGH | v1.24.1 | 1.23.10, 1.24.4 | https://avd.aquasec.com/nvd/cve-2025-22874 |

quay.io/solo-io/discovery:1.16.25

No Vulnerabilities Found for quay.io/solo-io/discovery:1.16.25 (alpine 3.21.3)

Vulnerabilities Listed for usr/local/bin/discovery

| Vulnerability ID | Package | Severity | Installed Version | Fixed Version | Reference |
|------------------|---------|----------|-------------------|---------------|-----------|
| CVE-2025-22874 | stdlib | HIGH | v1.24.1 | 1.23.10, 1.24.4 | https://avd.aquasec.com/nvd/cve-2025-22874 |

quay.io/solo-io/gloo:1.16.25

No Vulnerabilities Found for quay.io/solo-io/gloo:1.16.25 (ubuntu 20.04)

Vulnerabilities Listed for usr/local/bin/gloo

| Vulnerability ID | Package | Severity | Installed Version | Fixed Version | Reference |
|------------------|---------|----------|-------------------|---------------|-----------|
| CVE-2025-22874 | stdlib | HIGH | v1.24.1 | 1.23.10, 1.24.4 | https://avd.aquasec.com/nvd/cve-2025-22874 |

quay.io/solo-io/gloo-envoy-wrapper:1.16.25

No Vulnerabilities Found for quay.io/solo-io/gloo-envoy-wrapper:1.16.25 (ubuntu 20.04)

Vulnerabilities Listed for usr/local/bin/envoyinit

| Vulnerability ID | Package | Severity | Installed Version | Fixed Version | Reference |
|------------------|---------|----------|-------------------|---------------|-----------|
| CVE-2025-22874 | stdlib | HIGH | v1.24.1 | 1.23.10, 1.24.4 | https://avd.aquasec.com/nvd/cve-2025-22874 |

quay.io/solo-io/ingress:1.16.25

No Vulnerabilities Found for quay.io/solo-io/ingress:1.16.25 (alpine 3.21.3)

Vulnerabilities Listed for usr/local/bin/ingress

| Vulnerability ID | Package | Severity | Installed Version | Fixed Version | Reference |
|------------------|---------|----------|-------------------|---------------|-----------|
| CVE-2025-22874 | stdlib | HIGH | v1.24.1 | 1.23.10, 1.24.4 | https://avd.aquasec.com/nvd/cve-2025-22874 |

quay.io/solo-io/kubectl:1.16.25

No Vulnerabilities Found for quay.io/solo-io/kubectl:1.16.25 (alpine 3.21.3)

Vulnerabilities Listed for usr/local/bin/kubectl

| Vulnerability ID | Package | Severity | Installed Version | Fixed Version | Reference |
|------------------|---------|----------|-------------------|---------------|-----------|
| CVE-2025-22874 | stdlib | HIGH | v1.21.11 | 1.23.10, 1.24.4 | https://avd.aquasec.com/nvd/cve-2025-22874 |

quay.io/solo-io/sds:1.16.25

No Vulnerabilities Found for quay.io/solo-io/sds:1.16.25 (alpine 3.21.3)

Vulnerabilities Listed for usr/local/bin/sds

| Vulnerability ID | Package | Severity | Installed Version | Fixed Version | Reference |
|------------------|---------|----------|-------------------|---------------|-----------|
| CVE-2025-22874 | stdlib | HIGH | v1.24.1 | 1.23.10, 1.24.4 | https://avd.aquasec.com/nvd/cve-2025-22874 |

soloio-bot commented Jun 16 '25 08:06

Handling the stdlib CVE is a trickier one on LTS branches.

**Problem**

We have pinned our libraries to v0.28.x: https://github.com/solo-io/gloo/blob/v1.16.x/go.mod#L341. We also depend on kubectl 1.28 as the base image for our kubectl job: https://github.com/solo-io/gloo/blob/v1.16.x/jobs/kubectl/Dockerfile#L3. This version is EOL and outside of the Kubernetes support window (https://kubernetes.io/releases/).
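As a purely illustrative sketch of the kind of pin in question (hypothetical content, not the repository's actual file; the real lines are at the go.mod and Dockerfile links above):

```dockerfile
# Hypothetical jobs/kubectl/Dockerfile excerpt. The image name and tag are
# assumptions for illustration only.
# Whatever Go toolchain built the kubectl binary inside this base image
# determines the stdlib version that Trivy flags in the binary scan.
FROM bitnami/kubectl:1.28
```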

The version of go that is being used in that base image contains the vulnerability in question, though given its usage in our product, we are not at risk.

**Solution Options**

We have the following options to consider:

  1. Update the version of Kubernetes to ensure we are running a supported version. This is an example of how running a supported version makes our lives a lot easier. On the flip side, the 1-year support window for k8s releases would only mirror our support policy if we release at the same cadence.
  2. Update the kubectl version to ensure we are running a supported version. Per https://kubernetes.io/releases/version-skew-policy/ some skew is permitted, but kubectl may be at most one minor version ahead of the api-server. That means we could only jump to 1.29, which unfortunately is still out of support.
  3. Ignore the CVE in our trivyignore. This would prevent the CVE, which does not affect us, from being published in our docs. On the flip side, the suppression would apply to all of our LTS branches, and on some of those we may actually want to update.
  4. Add support for a per-LTS-branch trivyignore. In cases like this, we may be more comfortable ignoring a CVE only on a given branch. We could explore maintaining a "global" ignore file that all branches use and a "local" one that only a given branch uses (with the two ORed together).
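Option 4 could be sketched as a small merge step that runs in CI before the scan. The script below is a hypothetical illustration, not the project's actual tooling: the file names (`.trivyignore.global`, `.trivyignore.local`) are assumptions, and it simply ORs the two lists into the single ignore file Trivy reads.

```shell
#!/bin/sh
# Hypothetical sketch of a two-level Trivy ignore list (option 4).
# File names and layout are assumptions, not existing Gloo tooling.
set -eu

# "Global" ignore file: committed on every LTS branch.
printf '%s\n' 'CVE-2000-0001' > .trivyignore.global

# "Local" ignore file: committed only on the branch that needs it,
# e.g. suppressing the stdlib CVE on v1.16.x alone.
printf '%s\n' '# Go stdlib, not reachable in our usage' 'CVE-2025-22874' \
  > .trivyignore.local

# OR the two together: concatenate, drop comments and blank lines, dedupe.
cat .trivyignore.global .trivyignore.local \
  | grep -v '^#' | grep -v '^$' | sort -u > .trivyignore

cat .trivyignore
# The scan would then pick up the merged file, e.g.:
#   trivy image --ignorefile .trivyignore quay.io/solo-io/gloo:1.16.25
```

Branches without a `.trivyignore.local` would simply use the global list, so the default behavior stays unchanged.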

sam-heilbron commented Jun 18 '25 14:06

  1. Could this be considered a breaking change, e.g. if the k8s cluster was already a version behind and we bump the application version past the allowed skew?
  2. Don't have any examples, but I think we've used this approach in the past. The problem, as noted here, is that you can only go one bump ahead, and that doesn't always get you far enough.
  3. This seems to be our general approach, though with the noted drawback that the suppression applies to all branches, which might be too broad.
  4. This issue (closed due to inactivity) was created for this. I think this would be the ideal approach, and it is more in line with our general principle of "each LTS branch knows how to test itself", if we are willing to put in the effort.

sheidkamp commented Jun 18 '25 14:06

Personally, I would avoid 3. I don't think it's a good idea to add risk to more recent LTS branches just to support our oldest LTS branch that will drop off support in just a few months.

I'm not familiar enough with the support policies of Kubernetes, but if either of those options is a breaking change, then from a user standpoint I can't see why I wouldn't just update to 1.17 or later instead.

So by process of elimination, that leaves option 4 as the best in my opinion, but I still don't like it that much. I'm not a big fan of complicating our CVE scanning process further, unless this becomes a recurring theme in the future.

The last possible option would be to do nothing on 1.16.x and just remember to ignore the alert each time it gets reported. We would have to bear with this until 1.16.x drops off our support window (2-3 months from now). Two-plus months feels like a long time to wait for this issue to go away, but I can't think of a better option besides option 4 above.

ashishb-90 commented Jun 18 '25 15:06