The gRPC server in opm produces random channel order for FBC catalogs
We have a large catalog that we just converted from SQLite to FBC:
FBC: icr.io/cpopen/ibm-operator-catalog:latest
SQLite: icr.io/cpopen/ibm-operator-catalog:sqlite-latest
It looks like the order in which channels are returned by the GetPackage gRPC API is non-deterministic for the FBC catalog, whereas the SQLite catalog returned them either in lexicographically ascending order or in registry add <bundle> order (not sure which). As a result, clients such as the OpenShift Console now display the channels in a random order instead of the seemingly sorted order they showed before.
To reproduce:
$ podman run -d --name catalog -p 50051:50051 "icr.io/cpopen/ibm-operator-catalog:latest"
$ grpcurl -plaintext -d '{"name":"ibm-mq"}' localhost:50051 api.Registry/GetPackage | jq -r .channels[].csvName
ibm-mq.v2.0.8
ibm-mq.v2.1.0
ibm-mq.v1.2.0
ibm-mq.v1.6.0
ibm-mq.v1.7.0
ibm-mq.v1.8.2
ibm-mq.v1.5.0
ibm-mq.v2.2.2
ibm-mq.v2.3.0
ibm-mq.v1.0.0
ibm-mq.v1.1.0
ibm-mq.v1.3.8
ibm-mq.v1.4.0
$ grpcurl -plaintext -d '{"name":"ibm-mq"}' localhost:50051 api.Registry/GetPackage | jq -r .channels[].csvName
ibm-mq.v1.0.0
ibm-mq.v1.1.0
ibm-mq.v1.3.8
ibm-mq.v1.4.0
ibm-mq.v1.5.0
ibm-mq.v2.2.2
ibm-mq.v2.3.0
ibm-mq.v1.2.0
ibm-mq.v1.6.0
ibm-mq.v1.7.0
ibm-mq.v1.8.2
ibm-mq.v2.0.8
ibm-mq.v2.1.0
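The non-determinism can also be confirmed in one step by diffing two consecutive responses while the FBC container above is still running; with the FBC catalog the diff is usually non-empty, with the SQLite catalog it is empty:
$ diff <(grpcurl -plaintext -d '{"name":"ibm-mq"}' localhost:50051 api.Registry/GetPackage | jq -r '.channels[].csvName') \
       <(grpcurl -plaintext -d '{"name":"ibm-mq"}' localhost:50051 api.Registry/GetPackage | jq -r '.channels[].csvName')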
vs the SQLite version (the first container has to be removed before starting a second one with the same name and port):
$ podman rm -f catalog
$ podman run -d --name catalog -p 50051:50051 "icr.io/cpopen/ibm-operator-catalog:sqlite-latest"
$ grpcurl -plaintext -d '{"name":"ibm-mq"}' localhost:50051 api.Registry/GetPackage | jq -r .channels[].csvName
ibm-mq.v1.0.0
ibm-mq.v1.1.0
ibm-mq.v1.2.0
ibm-mq.v1.3.8
ibm-mq.v1.4.0
ibm-mq.v1.5.0
ibm-mq.v1.6.0
ibm-mq.v1.7.0
ibm-mq.v1.8.2
ibm-mq.v2.0.7
ibm-mq.v2.1.0
ibm-mq.v2.2.2
$ grpcurl -plaintext -d '{"name":"ibm-mq"}' localhost:50051 api.Registry/GetPackage | jq -r .channels[].csvName
ibm-mq.v1.0.0
ibm-mq.v1.1.0
ibm-mq.v1.2.0
ibm-mq.v1.3.8
ibm-mq.v1.4.0
ibm-mq.v1.5.0
ibm-mq.v1.6.0
ibm-mq.v1.7.0
ibm-mq.v1.8.2
ibm-mq.v2.0.7
ibm-mq.v2.1.0
ibm-mq.v2.2.2
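Until the server returns the channels in a stable order, clients can sort the names themselves. A plain string sort (which happens to match the order the SQLite catalog returned for these names) can be done in jq, for example:
$ grpcurl -plaintext -d '{"name":"ibm-mq"}' localhost:50051 api.Registry/GetPackage | jq -r '[.channels[].csvName] | sort | .[]'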
Thanks @cdjohnson! We'll take a closer look at this shortly.
Some context in a Slack thread is here: https://kubernetes.slack.com/archives/C0181L6JYQ2/p1677097814470179?thread_ts=1676998367.355039&cid=C0181L6JYQ2
This issue has been closed due to inactivity.