misunderstanding of the documentation (IndexField)
Hello,
I made an operator that creates Custom Resources, and now I'm trying to build a CLI that queries them with controller-runtime. Everything works fine in my operator, but when I try to list resources with a matching field from my CLI I get an error.
I tried this example from the docs:
// Pasted from: https://pkg.go.dev/sigs.k8s.io/[email protected]/pkg/client#example-Client-List
// Dummy main.go
package main

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    "sigs.k8s.io/controller-runtime/pkg/client"
)

var (
    c           client.Client
    someIndexer client.FieldIndexer
)

func main() {
    // someIndexer is a FieldIndexer over a Cache
    _ = someIndexer.IndexField(context.TODO(), &corev1.Pod{}, "spec.volumes.secret.secretName", func(o client.Object) []string {
        var res []string
        for _, vol := range o.(*corev1.Pod).Spec.Volumes {
            if vol.Secret == nil {
                continue
            }
            // just return the raw field value -- the indexer will take care of dealing with namespaces for us
            res = append(res, vol.Secret.SecretName)
        }
        return res
    })

    // elsewhere (e.g. in your reconciler)
    mySecretName := "someSecret" // derived from the reconcile.Request, for instance
    var podsWithSecrets corev1.PodList
    _ = c.List(context.Background(), &podsWithSecrets, client.MatchingFields{"spec.volumes.secret.secretName": mySecretName})
}
I get this error when I run this code:
$> go run ./...
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x1cf2b3e]
goroutine 1 [running]:
main.main()
/${PATH TO MY MODULE}/main.go:17 +0x3e
exit status 2
I'm new to controller-runtime and I'm pretty sure I've missed something. Could someone help me, please? Thanks.
How are you initializing someIndexer and c in your code base?
Check https://pkg.go.dev/sigs.k8s.io/[email protected]/pkg/client#example-New for how to set up a client.
https://github.com/kubernetes-sigs/controller-runtime/blob/master/pkg/client/example_test.go has the full example; you can refer to this one.
Hi @harshanarayana,
Thanks for your message. I had already read the example_test.go example but got the same error.
I think I've missed something, though, because the client (https://github.com/kubernetes-sigs/controller-runtime/blob/master/pkg/client/example_test.go#L36) used in ExampleFieldIndexer_secretName() is not the same as the one initialized in ExampleNew() (https://github.com/kubernetes-sigs/controller-runtime/blob/master/pkg/client/example_test.go#L41). So I renamed c client.Client to cl client.Client.
package main

import (
    "context"
    "fmt"
    "os"

    corev1 "k8s.io/api/core/v1"
    _ "k8s.io/client-go/plugin/pkg/client/auth"
    "sigs.k8s.io/controller-runtime/pkg/client"
    "sigs.k8s.io/controller-runtime/pkg/client/config"
)

var (
    cl          client.Client
    someIndexer client.FieldIndexer
)

func ExampleNew() {
    cl, err := client.New(config.GetConfigOrDie(), client.Options{})
    if err != nil {
        fmt.Println("failed to create client")
        os.Exit(1)
    }

    podList := &corev1.PodList{}

    err = cl.List(context.Background(), podList, client.InNamespace("01g3v209a2s2yp37p1abw36kw0"))
    if err != nil {
        fmt.Printf("failed to list pods in namespace default: %v\n", err)
        os.Exit(1)
    }
    fmt.Println("---------")
}

// This example shows how to set up and consume a field selector over a pod's volumes' secretName field.
func ExampleFieldIndexer_secretName() {
    // someIndexer is a FieldIndexer over a Cache
    _ = someIndexer.IndexField(context.TODO(), &corev1.Pod{}, "spec.volumes.secret.secretName", func(o client.Object) []string {
        var res []string
        for _, vol := range o.(*corev1.Pod).Spec.Volumes {
            if vol.Secret == nil {
                continue
            }
            // just return the raw field value -- the indexer will take care of dealing with namespaces for us
            res = append(res, vol.Secret.SecretName)
        }
        return res
    })

    // elsewhere (e.g. in your reconciler)
    mySecretName := "someSecret" // derived from the reconcile.Request, for instance
    var podsWithSecrets corev1.PodList
    _ = cl.List(context.Background(), &podsWithSecrets, client.MatchingFields{"spec.volumes.secret.secretName": mySecretName})
}

func main() {
    ExampleNew()
    ExampleFieldIndexer_secretName()
}
output:
$> go run ./...
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x1dfaabe]
goroutine 1 [running]:
main.ExampleFieldIndexer_secretName()
/${PATH TO MY MODULE}/main.go:39 +0x3e
main.main()
/${PATH TO MY MODULE}/main.go:59 +0x1c
exit status 2
@quentinalbertone On careful second check, you are right. The code in example_test.go doesn't really work, as someIndexer is never actually initialized, and hence the error you are running into. Sorry I missed the uninitialized someIndexer bit earlier.
xref: https://github.com/kubernetes-sigs/kubebuilder/blob/cf93a2787a70b5b047dac52a6fa6e818d0512988/docs/book/src/reference/watching-resources/testdata/external-indexed-field/controller.go#L144
The kubebuilder one has a full example of how to use the FieldIndexer
Yes, I know how to do it in an operator with a manager. Maybe I'm wrong, but a manager seems a little too much just to list Kubernetes objects in a CLI.
If the godoc and example_test.go are not useful or explicit enough, maybe we could ask to remove them or add more lines describing how to use them?
I am with you on this. The example file needs fixing. And yes, the manager way is a bit of overkill.
package main

import (
    "context"
    "fmt"
    "os"

    corev1 "k8s.io/api/core/v1"
    "sigs.k8s.io/controller-runtime/pkg/cache"
    "sigs.k8s.io/controller-runtime/pkg/client"
    "sigs.k8s.io/controller-runtime/pkg/client/config"
)

var (
    cl          client.Client
    someIndexer client.FieldIndexer
)

func init() {
    var err error
    cl, err = client.New(config.GetConfigOrDie(), client.Options{})
    if err != nil {
        fmt.Println("failed to create client")
        os.Exit(1)
    }

    someIndexer, err = cache.MultiNamespacedCacheBuilder([]string{"kube-system"})(config.GetConfigOrDie(), cache.Options{Scheme: cl.Scheme()})
    if err != nil {
        fmt.Println("failed to create informer")
        os.Exit(1)
    }
}

func ExampleNew() {
    var err error

    podList := &corev1.PodList{}

    err = cl.List(context.Background(), podList, client.InNamespace("kube-system"))
    if err != nil {
        fmt.Printf("failed to list pods in namespace default: %v\n", err)
        os.Exit(1)
    }
    fmt.Println("---------")
}

// This example shows how to set up and consume a field selector over a pod's volumes' secretName field.
func ExampleFieldIndexer_secretName() {
    // someIndexer is a FieldIndexer over a Cache
    _ = someIndexer.IndexField(context.TODO(), &corev1.Pod{}, "spec.volumes.secret.secretName", func(o client.Object) []string {
        var res []string
        for _, vol := range o.(*corev1.Pod).Spec.Volumes {
            if vol.Secret == nil {
                continue
            }
            // just return the raw field value -- the indexer will take care of dealing with namespaces for us
            res = append(res, vol.Secret.SecretName)
        }
        return res
    })

    // elsewhere (e.g. in your reconciler)
    mySecretName := "someSecret" // derived from the reconcile.Request, for instance
    var podsWithSecrets corev1.PodList
    _ = cl.List(context.Background(), &podsWithSecrets, client.MatchingFields{"spec.volumes.secret.secretName": mySecretName})
    fmt.Println("----")
}

func main() {
    ExampleNew()
    ExampleFieldIndexer_secretName()
}
Something like this should get your work done, I think. Check and let me know if this addresses your requirement. I will open a PR to update the example.
I don't think your code sample works. If you check the error returned by the List call in ExampleFieldIndexer_secretName(), you get an error like this:
field label not supported: spec.volumes.secret.secretName
@quentinalbertone I did not run the snippet, though; I just modified it to set up the right informer. For the supported field labels, you can check pkg/apis/core/v1/conversion.go in the k/k repo, which should tell you what works. Try kubectl get pods -n <namespace> --field-selector <selector>=<value>; if that doesn't work, controller-runtime won't be able to do it either. @FillZpp or someone can confirm if I am wrong on this.
Now that I read the selector, I don't think it will work. spec.volumes.secret.secretName is not even a valid jq path, is it? volumes is an array of values.
kubectl get pods -n ns -o json | jq '.items[].spec.volumes[] | select(.secret != null)'
Is there a precedent for how these are handled in this repo? Especially the Example*** functions: since there is no // Output: comment, they are not run at all, so we are missing validation of the code that actually goes into those functions.
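For reference, a minimal sketch of how Go example functions behave (ExampleHello and the file below are hypothetical, not from this repo): go test compiles all Example* functions in an example_test.go file, but only runs the ones that end with an // Output: comment and compares their stdout against it.

package main

import "fmt"

// A hypothetical example function. Without the trailing "// Output:" comment,
// `go test` only compiles it; with the comment, it is executed and the printed
// text is compared against the expected output.
func ExampleHello() {
    fmt.Println("hello")
    // Output: hello
}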
cc @FillZpp @alvaroaleman @vincepri
spec.volumes.secret.secretName is just the name of the index in the example; it can be an arbitrary string. The actual indexing that enables its usage is done by the func that is passed in.
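For illustration, a rough sketch (the manager mgr and the key name "my-secret-index" here are hypothetical, not from the example): the same arbitrary string just has to be used both when registering the index and when listing.

// Register an index under an arbitrary key; only the extraction func matters.
// mgr is assumed to be an already-constructed manager.Manager.
_ = mgr.GetFieldIndexer().IndexField(context.TODO(), &corev1.Pod{}, "my-secret-index", func(o client.Object) []string {
    var res []string
    for _, vol := range o.(*corev1.Pod).Spec.Volumes {
        if vol.Secret != nil {
            res = append(res, vol.Secret.SecretName)
        }
    }
    return res
})

// Query the cache-backed client with the very same key.
var pods corev1.PodList
_ = mgr.GetClient().List(context.Background(), &pods, client.MatchingFields{"my-secret-index": "someSecret"})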
Please do feel free to post a PR that extends the godoc if you feel it is not self-explanatory.
@quentinalbertone @harshanarayana Ah, I think you may misunderstand the controller-runtime client here.
The client you create directly with client.New() will always send both read and write requests to the API server rather than to a local cache, even if you have also created a cache with that client's scheme.
So, obviously, the API server cannot recognize your custom field selector.
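For example (a rough sketch, continuing with the cl client from above; the node name is hypothetical), a field selector the API server itself supports, such as spec.nodeName on Pods, does work with a direct client, while the custom index key does not:

// With a non-cached client, MatchingFields is sent to the API server as a
// field selector, so only server-supported selectors work.
// "spec.nodeName" is supported for Pods; "spec.volumes.secret.secretName" is not.
var pods corev1.PodList
err := cl.List(context.Background(), &pods,
    client.InNamespace("kube-system"),
    client.MatchingFields{"spec.nodeName": "some-node"}, // hypothetical node name
)
if err != nil {
    fmt.Printf("list failed: %v\n", err)
}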
Thanks @FillZpp, it's clearer for me now. Could you give me some documentation or advice so that I can move forward on this subject and do it properly?
@quentinalbertone Two simple ways:
(1) Create your own cache and use it for the indexed reads:
package main

import (
    "context"
    "fmt"
    "os"

    corev1 "k8s.io/api/core/v1"
    "sigs.k8s.io/controller-runtime/pkg/cache"
    "sigs.k8s.io/controller-runtime/pkg/client"
    "sigs.k8s.io/controller-runtime/pkg/client/config"
)

var (
    cl        client.Client
    someCache cache.Cache
)

func init() {
    var err error
    cfg := config.GetConfigOrDie()

    cl, err = client.New(cfg, client.Options{})
    if err != nil {
        fmt.Println("failed to create client")
        os.Exit(1)
    }

    someCache, err = cache.New(cfg, cache.Options{Namespace: "kube-system", Scheme: cl.Scheme()})
    if err != nil {
        fmt.Println("failed to create cache")
        os.Exit(1)
    }

    // Index should be registered before cache start
    _ = someCache.IndexField(context.TODO(), &corev1.Pod{}, "spec.volumes.secret.secretName", func(o client.Object) []string {
        var res []string
        for _, vol := range o.(*corev1.Pod).Spec.Volumes {
            if vol.Secret == nil {
                continue
            }
            // just return the raw field value -- the indexer will take care of dealing with namespaces for us
            res = append(res, vol.Secret.SecretName)
        }
        return res
    })

    go func() {
        if err = someCache.Start(context.Background()); err != nil {
            fmt.Println("failed to start cache")
            os.Exit(1)
        }
    }()
}

func getFromAPIServer() {
    var err error

    podList := &corev1.PodList{}

    err = cl.List(context.Background(), podList, client.InNamespace("kube-system"))
    if err != nil {
        fmt.Printf("failed to list pods in namespace default: %v\n", err)
        os.Exit(1)
    }
    fmt.Printf("Get Pods from client: %v\n", len(podList.Items))
}

func getFromCacheWithIndex() {
    // no need to wait for the cache to sync; List will wait for it
    var podsWithSecrets corev1.PodList
    _ = someCache.List(context.Background(), &podsWithSecrets, client.MatchingFields{"spec.volumes.secret.secretName": "someSecret"})
    fmt.Printf("Get Pods from cache: %v\n", len(podsWithSecrets.Items))
}

func main() {
    getFromAPIServer()
    getFromCacheWithIndex()
}
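If you prefer the CLI to fail fast when the cache cannot sync, a rough sketch (continuing the example above; the time import and the 30-second timeout are my own additions) would be to wait explicitly with WaitForCacheSync instead of relying on List to block:

// Optionally block until the informers have synced (or the context expires)
// before doing cache-backed reads.
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
if !someCache.WaitForCacheSync(ctx) {
    fmt.Println("cache did not sync before the timeout")
    os.Exit(1)
}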
(2) Use the client obtained from a Manager or Cluster, which is actually a delegating client that reads from the cache and writes to the API server:
package main

import (
    "context"
    "fmt"
    "os"

    corev1 "k8s.io/api/core/v1"
    "sigs.k8s.io/controller-runtime/pkg/client"
    "sigs.k8s.io/controller-runtime/pkg/client/config"
    "sigs.k8s.io/controller-runtime/pkg/cluster"
)

var (
    someCluster cluster.Cluster
)

func init() {
    var err error
    cfg := config.GetConfigOrDie()

    someCluster, err = cluster.New(cfg, func(opts *cluster.Options) {
        opts.Namespace = "kube-system"
    })
    if err != nil {
        fmt.Println("failed to create cluster")
        os.Exit(1)
    }

    // Index should be registered before cache start
    _ = someCluster.GetCache().IndexField(context.TODO(), &corev1.Pod{}, "spec.volumes.secret.secretName", func(o client.Object) []string {
        var res []string
        for _, vol := range o.(*corev1.Pod).Spec.Volumes {
            if vol.Secret == nil {
                continue
            }
            // just return the raw field value -- the indexer will take care of dealing with namespaces for us
            res = append(res, vol.Secret.SecretName)
        }
        return res
    })

    go func() {
        if err = someCluster.Start(context.Background()); err != nil {
            fmt.Println("failed to start cluster")
            os.Exit(1)
        }
    }()
}

func getFromAPIServer() {
    var err error

    podList := &corev1.PodList{}

    err = someCluster.GetAPIReader().List(context.Background(), podList, client.InNamespace("kube-system"))
    if err != nil {
        fmt.Printf("failed to list pods in namespace default: %v\n", err)
        os.Exit(1)
    }
    fmt.Printf("Get Pods from client: %v\n", len(podList.Items))
}

func getFromCacheWithIndex() {
    // no need to wait for the cache to sync; List will wait for it
    var podsWithSecrets corev1.PodList
    // or someCluster.GetCache()
    _ = someCluster.GetClient().List(context.Background(), &podsWithSecrets, client.MatchingFields{"spec.volumes.secret.secretName": "someSecret"})
    fmt.Printf("Get Pods from cache: %v\n", len(podsWithSecrets.Items))
}

func main() {
    getFromAPIServer()
    getFromCacheWithIndex()
}
That's great, thanks a lot @FillZpp! What do you think about the idea of changing this example: https://pkg.go.dev/sigs.k8s.io/[email protected]/pkg/client#example-Client-List ? It looks wrong and can be misunderstood. I can do it but I don't think it's fair since you gave the answer to me.
It looks wrong and can be misunderstood.
Emm... I don't think it is wrong, since it only gives an example of listing typed and unstructured objects from the API server.
I can do it but I don't think it's fair since you gave the answer to me.
That's OK, feel free to post a PR that extends the godoc if needed.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".