[enterprise-4.9] Issue in file support/gathering-cluster-data.adoc
Which section(s) is the issue in?
Table 1
What needs fixing?
The image name documented for the OCS must-gather on 4.9 is wrong and fails to work because that image is not available.
In fact, there is no v4.9 or v4.10 image. There is, however, a v4.8 image, which very likely works without changes.
Do we need to tag (alias) the existing v4.8 image under a v4.9 name, or does a new image actually need to be created?
The same applies to 4.10.
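For anyone wanting to verify which tags actually exist before deciding, the registry can be queried directly. This is a sketch assuming `skopeo` is installed and you are logged in to registry.redhat.io; it is not part of the original report:

```shell
# Log in to the Red Hat registry first (requires a Red Hat account):
#   podman login registry.redhat.io
# List the tags published for the OCS must-gather image:
skopeo list-tags docker://registry.redhat.io/ocs4/ocs-must-gather-rhel8
# If "v4.9" is absent from the returned Tags array, the documented
# image reference cannot be pulled, and must-gather fails with
# ImagePullBackOff as shown in the error output below.
```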
These are the errors that must-gather reports:
...
[must-gather-k9s92] OUT gather did not start: unable to pull image: ImagePullBackOff: Back-off pulling image "registry.redhat.io/ocs4/ocs-must-gather-rhel8:v4.9"
...
<snip lots and lots of output till the very end...>
Wrote inspect data to must-gather.local.4489320396108716211.
error running backup collection: errors occurred while gathering data:
<... snip -- other miscellaneous errors...>
error: gather did not start for pod must-gather-k9s92: unable to pull image: ImagePullBackOff: Back-off pulling image "registry.redhat.io/ocs4/ocs-must-gather-rhel8:v4.9"
I found that the v4.8 image does work. In fact, the best option appears to be omitting the version tag from the image string so that the latest version is pulled. That would allow must-gather to work correctly on 4.8, 4.9, and 4.10:
registry.redhat.io/ocs4/ocs-must-gather-rhel8
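As a workaround until the docs are fixed, the image can be passed explicitly on the command line. This is a sketch assuming cluster-admin access; whether the registry serves a usable default tag for this image is an assumption based on the observation above, not something confirmed in this thread:

```shell
# Run must-gather with an explicit image and no version tag, so the
# registry's default tag is used:
oc adm must-gather --image=registry.redhat.io/ocs4/ocs-must-gather-rhel8

# Alternatively, pin the tag that is known to exist:
oc adm must-gather --image=registry.redhat.io/ocs4/ocs-must-gather-rhel8:v4.8
```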
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle stale
The image has since been changed to registry.redhat.io/odf4/ocs-must-gather-rhel8:v4.9 due to the renaming to OpenShift Data Foundation. @mistergareth Can you let us know whether that works for you now?
https://docs.openshift.com/container-platform/4.9/support/gathering-cluster-data.html#gathering-data-specific-features_gathering-cluster-data
Stale issues rot after 30d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
@openshift-bot: Closing this issue.