
Helm provider: Polling a repository from behind a corporate proxy

Open mrsimpson opened this issue 6 years ago • 37 comments

I have a helm chart pulling an image from a private docker repository. An image pull secret has been specified in the chart, it has also been added to the values.yaml in the keel property:

keel:
  # keel policy (all/major/minor/patch/force)
  policy: force
  # trigger type, defaults to events such as pubsub, webhooks
  trigger: poll
  # polling schedule
  pollSchedule: "@every 2m"
  # images to track and update
  images:
    - repository: image.repository
      tag: image.tag
      imagePullSecret: image.imagePullSecret

However, it seems as if Keel is not able to see the image: time="2019-01-23T10:11:26Z" level=error msg="provider.defaultProviders: failed to get tracked images" error="context deadline exceeded" provider=helm

The image pull secret (of course) resides in the namespace of the deployment. Since it's used by multiple deployments, it's not part of a chart (and thus does not feature labels).

When triggering the webhook from Dockerhub, I can see some more strange log output which I cannot understand:

time="2019-01-23T09:58:58Z" level=info msg="provider.kubernetes: processing event" registry= repository=assistify/operations tag=latest
time="2019-01-23T09:58:58Z" level=info msg="provider.kubernetes: no plans for deployment updates found for this event" image=assistify/operations tag=latest

Have I got something completely wrong?

mrsimpson avatar Jan 23 '19 11:01 mrsimpson

Hi, seems like Keel can't talk to your helm tiller service. Which namespace have you used to deploy keel?

rusenask avatar Jan 23 '19 11:01 rusenask

kube-system

| => kubectl get service -n kube-system
NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                       AGE
keel                      ClusterIP   10.101.76.90     <none>        80/TCP                        25m
kube-dns                  ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP                 43d
kubernetes-dashboard      ClusterIP   10.103.78.171    <none>        443/TCP                       42d
tiller-deploy             ClusterIP   10.110.118.148   <none>        44134/TCP                     41d

Can I somehow verify connectivity from inside the keel pod?
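Something along these lines, maybe (untested; assumes busybox's nc is enough for a TCP check, since the keel image itself probably ships no shell):

# probe the tiller-deploy gRPC port from a throwaway pod in kube-system
kubectl run nettest -n kube-system --rm -it --restart=Never --image=busybox -- nc -zv tiller-deploy 44134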

mrsimpson avatar Jan 23 '19 11:01 mrsimpson

Maybe I should add that I set the HTTP*_PROXY environment variables on the Keel pod in order to be able to poll via our corporate proxy. The proxy, however, is bypassed within the cluster (there are exceptions for 10.* et al.); a sketch of how those variables could look follows the proxy config below.

tinyproxy config:

no upstream ".kube-system"
no upstream ".default"
no upstream ".utils"
no upstream "10.0.0.0/8"
no upstream "172.16.0.0/12"

mrsimpson avatar Jan 23 '19 11:01 mrsimpson

The problem here is that keel can't connect to tiller, so it doesn't get the list of images it should start tracking. There's an env variable:

TILLER_ADDRESS

that defaults to tiller-deploy:44134. That does seem to be the correct service, though. Could there be something inside your cluster that prevents Keel from calling the tiller service?
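E.g. set on the keel container spec (sketch only):

env:
  - name: TILLER_ADDRESS
    value: "tiller-deploy:44134"   # host:port of the tiller service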

rusenask avatar Jan 23 '19 11:01 rusenask

Yes, I verified that it is indeed the proxy that's bothering me. I removed it; now I'm not able to poll, but I no longer get the error message msg="provider.defaultProviders: failed to get tracked images" error="context deadline exceeded" provider=helm. 👍

Error messages like error msg="provider.helm: failed to get config for release" error="policy not specified" namespace=kube-system release=... indicate that Keel can talk to Tiller now.

Polling, however, is not possible now. Is there any built-in option to poll from behind a proxy?

mrsimpson avatar Jan 23 '19 12:01 mrsimpson

That error is not from the HTTP client that tries to get images; it just indicates that no policy was found in that chart. From the registry client I would expect to see HTTP errors about contacting the registry.

Maybe instead of tiller-deploy:port you could specify the IP address of tiller? I have never tried it, but this might work (or whatever the tiller-deploy IP resolves to):

TILLER_ADDRESS=10.110.118.148:44134
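You can look up that ClusterIP with plain kubectl, something like:

kubectl get svc tiller-deploy -n kube-system -o jsonpath='{.spec.clusterIP}'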

rusenask avatar Jan 23 '19 12:01 rusenask

@rusenask maybe I should be a bit more verbose, I think I understood most of it now.

Our setup is a cluster in a VPC behind a corporate proxy. We have a private repository on Dockerhub. I wanted to poll Dockerhub instead of exposing a public webhook, so I configured keel as per the documentation. When doing this (Keel based on Helm chart version 0.7.6), I get the following error log:

time="2019-01-23T12:43:26Z" level=error msg="trigger.poll.RepositoryWatcher.Watch: failed to add image watch job" error="Get https://index.docker.io/v2/assistify/operations/manifests/latest: dial tcp: lookup index.docker.io on 10.96.0.10:53: no such host" image="namespace:assistify,image:index.docker.io/assistify/operations,provider:helm,trigger:poll,sched:@every 2m,secrets:[assistify-private-registry]"
time="2019-01-23T12:43:26Z" level=error msg="trigger.poll.manager: got error(-s) while watching images" error="encountered errors while adding images: Get https://index.docker.io/v2/assistify/operations/manifests/latest: dial tcp: lookup index.docker.io on 10.96.0.10:53: no such host"

I assumed this to be an issue with the corporate proxy, so I modified the Keel helm chart locally so that the keel deployment gets values which it propagates into the HTTP*_PROXY environment variables. After I did this, it seems as if Keel is no longer able to talk to Tiller. I checked the proxy, which runs inside our cluster: it should establish direct connections within the cluster and route to the corporate proxy only for other resources.

But no matter how I specify TILLER_ADDRESS, and even though it actually matches the exceptions in the proxy conf, I get msg="provider.defaultProviders: failed to get tracked images" error="context deadline exceeded" provider=helm, which as far as I understand indicates non-connectivity to Tiller.

I also tried to set NO_PROXY env variable on the Keel-pod, but this has no effect either.

So instead of specifying the proxy via environment variables (which obviously breaks the Keel-to-Tiller connectivity), is there a way to poll registries from behind a proxy?

mrsimpson avatar Jan 23 '19 12:01 mrsimpson

You can use my other project webhookrelay.com: https://keel.sh/v1/guide/documentation.html#Receiving-webhooks-without-public-endpoint

It works by creating a connection to the cloud service and any webhooks are streamed back to the internal network on top of that tunnel and through a sidecar it would just call keel on http://localhost:9300/v1/webhooks/dockerhub endpoint. It provides additional security when compared to just exposing your service to the internet by only allowing one-way traffic and only to a specific server & path. There's a free tier of 150 webhooks/month but I can bump it up a bit if you like it.
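From memory (so double-check the chart's values.yaml for the exact value names), enabling the relay sidecar through the keel chart would look roughly like:

webhookRelay:
  enabled: true                     # value names here are from memory, verify against the chart
  key: "<relay-access-key>"
  secret: "<relay-access-secret>"
  bucket: dockerhub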

rusenask avatar Jan 23 '19 13:01 rusenask

Yup, I have seen that as well. I first wanted to go for a self-contained solution, simply because ordering SaaS in our company is a burden I'm not willing to take on... Any chance of getting polling enabled?

mrsimpson avatar Jan 23 '19 13:01 mrsimpson

Well, one option is to make your proxy always route tiller queries straight to tiller-deploy. Another, easier option would be to use the k8s provider instead of helm: just add the keel policy annotations to your chart's deployment.yaml template and disable the helm provider altogether. Then the only queries to the outside world will be to the registry.
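Roughly like this in the deployment template (the policy and schedule values are just examples):

metadata:
  annotations:
    keel.sh/policy: force
    keel.sh/trigger: poll
    keel.sh/pollSchedule: "@every 2m"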

rusenask avatar Jan 23 '19 13:01 rusenask

Ah, this means I can set http_proxy, and the only thing I'll lose is the decoupling of deployment and keel config. Sounds like a plan, let me check later. Thanks for the awesome support and the amazing tool!

mrsimpson avatar Jan 23 '19 15:01 mrsimpson

Using the k8s provider works as expected 👍 However, I'm keeping this open with a changed subject: proxy support should somehow be worked out.

Nevertheless: Awesome work, @rusenask 🎉

mrsimpson avatar Jan 24 '19 23:01 mrsimpson

Hi.

I also get this issue:

time="2019-03-06T16:51:07Z" level=debug msg="tracked images" images="[]"
time="2019-03-06T16:51:07Z" level=debug msg="trigger.poll.manager: performing scan"
time="2019-03-06T16:51:12Z" level=error msg="provider.defaultProviders: failed to get tracked images" error="context deadline exceeded" provider=helm

My tiller is running OK, but I'm not sure why we can't connect to it.


My connection endpoint:

tillerAddress: "tiller-deploy.kube-system:44134"

botzill avatar Mar 06 '19 17:03 botzill

is keel in the same namespace?

rusenask avatar Mar 06 '19 17:03 rusenask


Yes it is, both in kube-system

botzill avatar Mar 06 '19 17:03 botzill

So, as I understand it, I don't need to specify any

keel.sh/policy: major
keel.sh/trigger: poll  

and by default the helm provider will watch all the images deployed via helm charts?

botzill avatar Mar 07 '19 14:03 botzill

yes, as long as there's a keel config in the values.yaml of your chart: https://keel.sh/v1/guide/documentation.html#Helm-example

rusenask avatar Mar 07 '19 14:03 rusenask

Well, yes, it's enabled; here it is:

keel:
  # keel policy (all/major/minor/patch/force)
  policy: all
  # trigger type, defaults to events such as pubsub, webhooks
  trigger: poll
  # polling schedule
  pollSchedule: "@every 3m"
  # images to track and update
#  images:
#    - repository: image.repository
#      tag: image.tag

botzill avatar Mar 07 '19 14:03 botzill

In this case it won't track anything; those

#  images:
#    - repository: image.repository
#      tag: image.tag

shouldn't be commented out, and they should be targeting the other helm variables that hold the image name and tag :)
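E.g. (illustrative values only; the image block below stands in for whatever your chart actually uses):

image:
  repository: myorg/myapp            # hypothetical image name
  tag: "1.0.0"

keel:
  policy: all
  trigger: poll
  images:
    - repository: image.repository   # dotted path to the chart value above
      tag: image.tag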

rusenask avatar Mar 07 '19 14:03 rusenask

Yes, I thought that this is the reason and tried without them.

I did add those back but still, same errors.

time="2019-03-07T14:41:59Z" level=debug msg="added deployment kube-eagle" context=translator
time="2019-03-07T14:41:59Z" level=debug msg="added deployment tiller-deploy" context=translator
time="2019-03-07T14:41:59Z" level=debug msg="added deployment external-dns" context=translator
time="2019-03-07T14:41:59Z" level=debug msg="added deployment nginx-ingress-controller" context=translator
time="2019-03-07T14:41:59Z" level=debug msg="added deployment kubernetes-replicator" context=translator
time="2019-03-07T14:42:02Z" level=error msg="provider.defaultProviders: failed to get tracked images" error="context deadline exceeded" provider=helm
time="2019-03-07T14:42:02Z" level=debug msg="tracked images" images="[]"
time="2019-03-07T14:42:05Z" level=debug msg="trigger.poll.manager: performing scan"
time="2019-03-07T14:42:10Z" level=error msg="provider.defaultProviders: failed to get tracked images" error="context deadline exceeded" provider=helm
time="2019-03-07T14:42:10Z" level=debug msg="tracked images" images="[]"
time="2019-03-07T14:42:10Z" level=debug msg="trigger.poll.manager: performing scan"
time="2019-03-07T14:42:15Z" level=error msg="provider.defaultProviders: failed to get tracked images" error="context deadline exceeded" provider=helm
time="2019-03-07T14:42:15Z" level=debug msg="tracked images" images="[]"
time="2019-03-07T14:42:15Z" level=debug msg="trigger.poll.manager: performing scan"
time="2019-03-07T14:42:20Z" level=error msg="provider.defaultProviders: failed to get tracked images" error="context deadline exceeded" provider=helm
time="2019-03-07T14:42:20Z" level=debug msg="tracked images" images="[]"
time="2019-03-07T14:42:20Z" level=debug msg="trigger.poll.manager: performing scan"
time="2019-03-07T14:42:25Z" level=error msg="provider.defaultProviders: failed to get tracked images" error="context deadline exceeded" provider=helm
time="2019-03-07T14:42:25Z" level=debug msg="tracked images" images="[]"
time="2019-03-07T14:42:25Z" level=debug msg="trigger.poll.manager: performing scan"
time="2019-03-07T14:42:30Z" level=error msg="provider.defaultProviders: failed to get tracked images" error="context deadline exceeded" provider=helm
time="2019-03-07T14:42:30Z" level=debug msg="tracked images" images="[]"
time="2019-03-07T14:42:30Z" level=debug msg="trigger.poll.manager: performing scan"
time="2019-03-07T14:42:35Z" level=error msg="provider.defaultProviders: failed to get tracked images" error="context deadline exceeded" provider=helm
time="2019-03-07T14:42:35Z" level=debug msg="tracked images" images="[]"
time="2019-03-07T14:42:35Z" level=debug msg="trigger.poll.manager: performing scan"
time="2019-03-07T14:42:40Z" level=error msg="provider.defaultProviders: failed to get tracked images" error="context deadline exceeded" provider=helm
time="2019-03-07T14:42:40Z" level=debug msg="tracked images" images="[]"
time="2019-03-07T14:42:40Z" level=debug msg="trigger.poll.manager: performing scan"
time="2019-03-07T14:42:45Z" level=error msg="provider.defaultProviders: failed to get tracked images" error="context deadline exceeded" provider=helm
time="2019-03-07T14:42:45Z" level=debug msg="tracked images" images="[]"
time="2019-03-07T14:42:45Z" level=debug msg="trigger.poll.manager: performing scan"
time="2019-03-07T14:42:50Z" level=error msg="provider.defaultProviders: failed to get tracked images" error="context deadline exceeded" provider=helm
time="2019-03-07T14:42:50Z" level=debug msg="tracked images" images="[]"
time="2019-03-07T14:42:50Z" level=debug msg="trigger.poll.manager: performing scan"
time="2019-03-07T14:42:55Z" level=error msg="provider.defaultProviders: failed to get tracked images" error="context deadline exceeded" provider=helm
time="2019-03-07T14:42:55Z" level=debug msg="tracked images" images="[]"
time="2019-03-07T14:42:55Z" level=debug msg="trigger.poll.manager: performing scan"
time="2019-03-07T14:43:00Z" level=error msg="provider.defaultProviders: failed to get tracked images" error="context deadline exceeded" provider=helm
time="2019-03-07T14:43:00Z" level=debug msg="tracked images" images="[]"
time="2019-03-07T14:43:00Z" level=debug msg="trigger.poll.manager: performing scan"
time="2019-03-07T14:43:05Z" level=error msg="provider.defaultProviders: failed to get tracked images" error="context deadline exceeded" provider=helm
time="2019-03-07T14:43:05Z" level=debug msg="tracked images" images="[]"
time="2019-03-07T14:43:05Z" level=debug msg="trigger.poll.manager: performing scan"
time="2019-03-07T14:43:10Z" level=error msg="provider.defaultProviders: failed to get tracked images" error="context deadline exceeded" provider=helm

So, these options are only for auto-update? Can I set approvals in this section as well?

botzill avatar Mar 07 '19 14:03 botzill

So, the added lines (time="2019-03-07T14:41:59Z" level=debug msg="added ...") indicate that it was connected to tiller. But I still have these errors:

level=debug msg="tracked images" images="[]"
time="2019-03-07T14:43:05Z" level=debug msg="trigger.poll.manager: performing scan"

Tested with this config:

keel:
  # keel policy (all/major/minor/patch/force)
  policy: all
  # trigger type, defaults to events such as pubsub, webhooks
  trigger: poll
  # polling schedule
  pollSchedule: "@every 1m"
  # approvals required to proceed with an update
  approvals: 1
  # approvals deadline in hours
  approvalDeadline: 24
  # images to track and update
  images:
    - repository: image.repository
      tag: image.tag

botzill avatar Mar 07 '19 15:03 botzill

Maybe there are no updates for any images? Should I see a log entry indicating that everything is up to date? Or do I receive a message in Slack when everything is up to date?

Thx.

botzill avatar Mar 07 '19 18:03 botzill

Hi, no, unfortunately it seems that Keel cannot connect to Helm:

time="2019-03-07T14:43:00Z" level=error msg="provider.defaultProviders: failed to get tracked images" error="context deadline exceeded" provider=hel

Could there be some internal network restrictions? Do you use a proxy internally that Keel has to use as well?

rusenask avatar Mar 07 '19 18:03 rusenask

I think there are no restrictions; here is how I deploy it:

https://github.com/botzill/terraform-tinfoil-tiller/blob/master/main.tf

service

kube-system    tiller-deploy                       ClusterIP      10.245.156.186   <none>         44134/TCP                                                                  10h
k describe service tiller-deploy -n kube-system
Name:              tiller-deploy
Namespace:         kube-system
Labels:            app=helm
                   name=tiller
Annotations:       <none>
Selector:          app=helm,name=tiller
Type:              ClusterIP
IP:                10.245.156.186
Port:              tiller  44134/TCP
TargetPort:        tiller/TCP
Endpoints:         10.244.93.2:44134,10.244.93.4:44134
Session Affinity:  None
Events:            <none>

Thx.

botzill avatar Mar 07 '19 18:03 botzill

Is there any way I can debug this and see what is going on?

botzill avatar Mar 08 '19 07:03 botzill

Looks like changing the tiller address (removing the cluster.local suffix) fixes the problem:

--set-string helmProvider.tillerAddress="tiller-deploy.kube-system:44134"

klrservices avatar Mar 27 '19 15:03 klrservices

Hi @klrservices, I did change this and it's still not working.

@rusenask I'm running this on a DigitalOcean k8s cluster. I see that they use, out of the box, https://cilium.io/. Could this be an issue why it can't connect? I really want to make it work.

Thx.

botzill avatar Mar 29 '19 08:03 botzill

Unlikely; if both keel and tiller are in the same namespace, it should be reachable. Can you try

--set-string helmProvider.tillerAddress="tiller-deploy:44134"

?

rusenask avatar Mar 29 '19 09:03 rusenask

Thx @rusenask

I did try that new address, but I'm still having these issues:

time="2019-03-29T10:30:43Z" level=info msg="extension.credentialshelper: helper registered" name=aws
time="2019-03-29T10:30:43Z" level=info msg="bot: registered" name=slack
time="2019-03-29T10:30:43Z" level=info msg="keel starting..." arch=amd64 build_date=2019-02-06T223140Z go_version=go1.10.3 os=linux revision=0944517e version=0.13.1
time="2019-03-29T10:30:43Z" level=info msg="extension.notification.slack: sender configured" channels="[k8s-stats]" name=slack
time="2019-03-29T10:30:43Z" level=info msg="notificationSender: sender configured" sender name=slack
time="2019-03-29T10:30:43Z" level=info msg="provider.kubernetes: using in-cluster configuration"
time="2019-03-29T10:30:43Z" level=info msg="provider.helm: tiller address 'tiller-deploy:44134' supplied"
time="2019-03-29T10:30:43Z" level=info msg="provider.defaultProviders: provider 'kubernetes' registered"
time="2019-03-29T10:30:43Z" level=info msg="provider.defaultProviders: provider 'helm' registered"
time="2019-03-29T10:30:43Z" level=info msg="extension.credentialshelper: helper registered" name=secrets
time="2019-03-29T10:30:43Z" level=info msg="trigger.poll.manager: polling trigger configured"
time="2019-03-29T10:30:43Z" level=debug msg="trigger.poll.manager: performing scan"
time="2019-03-29T10:30:43Z" level=info msg="webhook trigger server starting..." port=9300
time="2019-03-29T10:30:44Z" level=info msg=started context=buffer
time="2019-03-29T10:30:44Z" level=info msg=started context=watch resource=deployments
time="2019-03-29T10:30:44Z" level=info msg=started context=watch resource=cronjobs
time="2019-03-29T10:30:44Z" level=info msg=started context=watch resource=daemonsets
time="2019-03-29T10:30:44Z" level=info msg=started context=watch resource=statefulsets
time="2019-03-29T10:30:45Z" level=debug msg="added cronjob mongodb-backup-job" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added deployment cert-manager" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added deployment hpa-operator-kube-metrics-adapter" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added deployment kubernetes-dashboard" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added deployment cilium-operator" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added deployment coredns" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added deployment cert-manager-cainjector" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added deployment nginx-ingress-default-backend" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added deployment tiller-deploy" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added deployment some-api" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added deployment external-dns" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added deployment jobs-seeker" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added deployment kube-eagle" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added deployment prometheus-grafana" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added deployment cert-manager-webhook" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added deployment hpa-operator-hpa-operator" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added deployment kubernetes-replicator" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added deployment metrics-server" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added deployment kubedb-operator" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added deployment longhorn-driver-deployer" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added deployment longhorn-ui" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added deployment prometheus-kube-state-metrics" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added deployment prometheus-prometheus-oper-operator" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added deployment keel" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added statefulset mongodb-primary" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added statefulset mongodb-secondary" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added statefulset alertmanager-prometheus-prometheus-oper-alertmanager" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added statefulset prometheus-prometheus-prometheus-oper-prometheus" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added statefulset invoiceninja" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added statefulset mysqldb" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added statefulset mongodb-arbiter" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added daemonset cilium" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added daemonset csi-do-node" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added daemonset do-node-agent" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added daemonset kube-proxy" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added daemonset engine-image-ei-6e2b0e32" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added daemonset longhorn-manager" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added daemonset prometheus-prometheus-node-exporter" context=translator
time="2019-03-29T10:30:45Z" level=debug msg="added daemonset nginx-ingress-controller" context=translator
time="2019-03-29T10:30:48Z" level=error msg="provider.defaultProviders: failed to get tracked images" error="context deadline exceeded" provider=helm
time="2019-03-29T10:30:48Z" level=debug msg="tracked images" images="[]"
time="2019-03-29T10:30:51Z" level=debug msg="trigger.poll.manager: performing scan"
time="2019-03-29T10:30:56Z" level=error msg="provider.defaultProviders: failed to get tracked images" error="context deadline exceeded" provider=helm
time="2019-03-29T10:30:56Z" level=debug msg="tracked images" images="[]"
time="2019-03-29T10:30:56Z" level=debug msg="trigger.poll.manager: performing scan"
time="2019-03-29T10:31:01Z" level=error msg="provider.defaultProviders: failed to get tracked images" error="context deadline exceeded" provider=helm
time="2019-03-29T10:31:01Z" level=debug msg="tracked images" images="[]"
time="2019-03-29T10:31:01Z" level=debug msg="trigger.poll.manager: performing scan"
time="2019-03-29T10:31:06Z" level=error msg="provider.defaultProviders: failed to get tracked images" error="context deadline exceeded" provider=helm
time="2019-03-29T10:31:06Z" level=debug msg="tracked images" images="[]"
time="2019-03-29T10:31:06Z" level=debug msg="trigger.poll.manager: performing scan"
time="2019-03-29T10:31:11Z" level=error msg="provider.defaultProviders: failed to get tracked images" error="context deadline exceeded" provider=helm
time="2019-03-29T10:31:11Z" level=debug msg="tracked images" images="[]"
time="2019-03-29T10:31:11Z" level=debug msg="trigger.poll.manager: performing scan"
time="2019-03-29T10:31:16Z" level=error msg="provider.defaultProviders: failed to get tracked images" error="context deadline exceeded" provider=helm
time="2019-03-29T10:31:16Z" level=debug msg="tracked images" images="[]"
time="2019-03-29T10:31:16Z" level=debug msg="trigger.poll.manager: performing scan"
time="2019-03-29T10:31:21Z" level=error msg="provider.defaultProviders: failed to get tracked images" error="context deadline exceeded" provider=helm
time="2019-03-29T10:31:21Z" level=debug msg="tracked images" images="[]"
time="2019-03-29T10:31:21Z" level=debug msg="trigger.poll.manager: performing scan"
time="2019-03-29T10:31:26Z" level=error msg="provider.defaultProviders: failed to get tracked images" error="context deadline exceeded" provider=helm
time="2019-03-29T10:31:26Z" level=debug msg="tracked images" images="[]"
time="2019-03-29T10:31:26Z" level=debug msg="trigger.poll.manager: performing scan"
time="2019-03-29T10:31:31Z" level=error msg="provider.defaultProviders: failed to get tracked images" error="context deadline exceeded" provider=helm
time="2019-03-29T10:31:31Z" level=debug msg="tracked images" images="[]"
time="2019-03-29T10:31:31Z" level=debug msg="trigger.poll.manager: performing scan"
time="2019-03-29T10:31:36Z" level=error msg="provider.defaultProviders: failed to get tracked images" error="context deadline exceeded" provider=helm
time="2019-03-29T10:31:36Z" level=debug msg="tracked images" images="[]"
time="2019-03-29T10:31:36Z" level=debug msg="trigger.poll.manager: performing scan"
time="2019-03-29T10:31:41Z" level=error msg="provider.defaultProviders: failed to get tracked images" error="context deadline exceeded" provider=helm
time="2019-03-29T10:31:41Z" level=debug msg="tracked images" images="[]"
time="2019-03-29T10:31:41Z" level=debug msg="trigger.poll.manager: performing scan"
time="2019-03-29T10:31:46Z" level=error msg="provider.defaultProviders: failed to get tracked images" error="context deadline exceeded" provider=helm
time="2019-03-29T10:31:46Z" level=debug msg="tracked images" images="[]"
time="2019-03-29T10:31:46Z" level=debug msg="trigger.poll.manager: performing scan"
time="2019-03-29T10:31:51Z" level=error msg="provider.defaultProviders: failed to get tracked images" error="context deadline exceeded" provider=helm
time="2019-03-29T10:31:51Z" level=debug msg="tracked images" images="[]"
time="2019-03-29T10:31:51Z" level=debug msg="trigger.poll.manager: performing scan"
time="2019-03-29T10:31:56Z" level=error msg="provider.defaultProviders: failed to get tracked images" error="context deadline exceeded" provider=helm
time="2019-03-29T10:31:56Z" level=debug msg="tracked images" images="[]"
time="2019-03-29T10:31:56Z" level=debug msg="trigger.poll.manager: performing scan"
time="2019-03-29T10:32:01Z" level=error msg="provider.defaultProviders: failed to get tracked images" error="context deadline exceeded" provider=helm
time="2019-03-29T10:32:01Z" level=debug msg="tracked images" images="[]"
time="2019-03-29T10:32:01Z" level=debug msg="trigger.poll.manager: performing scan"
time="2019-03-29T10:32:06Z" level=error msg="provider.defaultProviders: failed to get tracked images" error="context deadline exceeded" provider=helm
time="2019-03-29T10:32:06Z" level=debug msg="tracked images" images="[]"
time="2019-03-29T10:32:06Z" level=debug msg="trigger.poll.manager: performing scan"
time="2019-03-29T10:32:11Z" level=error msg="provider.defaultProviders: failed to get tracked images" error="context deadline exceeded" provider=helm
time="2019-03-29T10:32:11Z" level=debug msg="tracked images" images="[]"
time="2019-03-29T10:32:11Z" level=debug msg="trigger.poll.manager: performing scan"

botzill avatar Mar 29 '19 10:03 botzill

Hi @rusenask, any other hints on where I could check this?

botzill avatar Apr 03 '19 09:04 botzill