
Metrics collector fails to create watcher

Open gigabyte132 opened this issue 1 year ago • 10 comments

What happened?

I tried running the example enas-cpu experiment with a StdOut metrics collector, and the experiment fails to run due to an error in the metrics-collector container:

2024/09/24 13:19:08 FATAL -- failed to create Watcher
goroutine 18 [running]:
runtime/debug.Stack()
        /usr/local/go/src/runtime/debug/stack.go:26 +0x5e
github.com/nxadm/tail/util.Fatal({0xe14a9b?, 0xc000282000?}, {0x0, 0x0, 0x0})
        /go/pkg/mod/github.com/nxadm/[email protected]/util/util.go:23 +0x8b
github.com/nxadm/tail/watch.(*InotifyTracker).run(0xc0002b6000)
        /go/pkg/mod/github.com/nxadm/[email protected]/watch/inotify_tracker.go:220 +0x68
created by github.com/nxadm/tail/watch.init.func1 in goroutine 17
        /go/pkg/mod/github.com/nxadm/[email protected]/watch/inotify_tracker.go:55 +0x14e 

There is a related issue: https://github.com/kubeflow/katib/issues/1769. Since then, Katib has migrated from the hpcloud tailing library to nxadm, but it seems I ran into the exact same issue regardless. This is with the following version of the metrics collector image: https://hub.docker.com/layers/kubeflowkatib/file-metrics-collector/v1beta1-867c40a/images/sha256-3ab68e0932dd6c2028592dd7a7443ba4970e54f91ab145d6d35828112780eb0a?context=explore (the change to nxadm wasn't included in the 0.17 release). I have also tried both the 0.16 and 0.17 images, but the result was the same. I haven't had time to debug this more in depth (e.g. building my own image with extra logs).
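A first diagnostic for failures like this (assumption on my part: the watcher creation is failing because of a kernel inotify limit on the node, which is a common cause of this symptom) is to read the inotify sysctls from inside the failing container. A minimal Go sketch, Linux only:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// readInotifySysctl returns the value of one /proc/sys/fs/inotify
// setting as a trimmed string (Linux only).
func readInotifySysctl(name string) (string, error) {
	b, err := os.ReadFile("/proc/sys/fs/inotify/" + name)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	// Print the three kernel knobs that bound inotify usage per user.
	for _, name := range []string{"max_user_instances", "max_user_watches", "max_queued_events"} {
		v, err := readInotifySysctl(name)
		if err != nil {
			fmt.Printf("%s: %v\n", name, err)
			continue
		}
		fmt.Printf("%s = %s\n", name, v)
	}
}
```

If max_user_instances is low (128 is a common default) and many pods on the node use file watchers, new watchers can fail even though each process is well under its fd limit.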

What did you expect to happen?

The metrics collector should work normally. I have tried the File metrics collector and things seem to be fine, but I haven't managed to get any Katib Experiment of any kind working with the StdOut one.

Environment

Kubernetes version:

$ kubectl version
Client Version: v1.29.6
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.30.2

Katib controller version:

$ kubectl get pods -n kubeflow -l katib.kubeflow.org/component=controller -o jsonpath="{.items[*].spec.containers[*].image}"
kubeflow/kubeflowkatib/katib-controller:v0.16.0

Katib Python SDK version:

$ pip show kubeflow-katib
Name: kubeflow-katib
Version: 0.17.0

Impacted by this bug?

Give it a 👍. We prioritize the issues with the most 👍.

gigabyte132 avatar Sep 25 '24 10:09 gigabyte132

Thanks for creating this issue @gigabyte132! @tariq-hasan @Electronic-Waste Can you please help us explore this issue?

/remove-label lifecycle/needs-triage
/area backend

andreyvelich avatar Sep 25 '24 11:09 andreyvelich

I can't reproduce this with https://github.com/kubeflow/katib/blob/master/examples/v1beta1/nas/enas-cpu.yaml

The experiment completed successfully in my environment:

$ kubectl get experiment -n kubeflow 
NAME       TYPE        STATUS   AGE
enas-cpu   Succeeded   True     27m

@gigabyte132 For your reference, my setup environment is:

Kubernetes version:

$ kubectl version
Client Version: v1.30.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.30.1

Katib controller version:

$ kubectl get pods -n kubeflow -l katib.kubeflow.org/component=controller -o jsonpath="{.items[*].spec.containers[*].image}"
docker.io/kubeflowkatib/katib-controller:latest

Katib Python SDK version:

$ pip show kubeflow-katib
Name: kubeflow-katib
Version: 0.17.0

Maybe you can upgrade katib-controller to the latest version and try again?

cc @andreyvelich @tariq-hasan

Electronic-Waste avatar Sep 28 '24 05:09 Electronic-Waste

One other thing I was curious about is whether @gigabyte132 saw this error only for the enas-cpu experiment, or also for other experiments such as darts-cpu and file-metrics-collector.

vector-flow avatar Oct 01 '24 09:10 vector-flow

@tariq-hasan for me, any type of experiment that uses the file-metrics-collector fails with this error

gigabyte132 avatar Oct 01 '24 12:10 gigabyte132

Unfortunately, neither nxadm nor hpcloud logs the underlying error when creating the fsnotify.Watcher fails. I used a custom Go image to run

package main

import (
	"fmt"

	"github.com/fsnotify/fsnotify"
)

func main() {
	// Create the watcher directly, to surface the underlying error
	// that nxadm/tail swallows before calling util.Fatal.
	_, err := fsnotify.NewWatcher()
	if err != nil {
		fmt.Printf("failed to create Watcher %v", err)
	}
}
}

to see what error is logged when this runs in the experiment container. Just as in the metrics container, it failed; the error message was "failed to create Watcher Too many open files". This is really strange, as in my case the process had fewer open files than the limit (checked with lsof and ulimit -n).
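That symptom is consistent with an inotify instance limit rather than an fd limit: on Linux, inotify_init(2) returns EMFILE ("too many open files") once the per-user fs.inotify.max_user_instances cap is reached, regardless of how few file descriptors the process holds. A Linux-only sketch using the stdlib syscall package to demonstrate this (my own diagnostic, not from the thread):

```go
package main

import (
	"fmt"
	"syscall"
)

// exhaustInotify opens inotify instances until the kernel refuses,
// closes them all, and returns how many it obtained plus the final
// error. With the default fs.inotify.max_user_instances of 128, the
// loop typically stops well below the process's open-file ulimit,
// yet the error still reads "too many open files" (EMFILE).
func exhaustInotify() (int, error) {
	var fds []int
	defer func() {
		for _, fd := range fds {
			syscall.Close(fd)
		}
	}()
	for {
		fd, err := syscall.InotifyInit()
		if err != nil {
			return len(fds), err
		}
		fds = append(fds, fd)
	}
}

func main() {
	n, err := exhaustInotify()
	fmt.Printf("got %d inotify instances before failing: %v\n", n, err)
}
```

Note that running this briefly exhausts the inotify budget for every process owned by the same UID, so it is only suitable for a throwaway debug container. If the reported count is far below ulimit -n, raising fs.inotify.max_user_instances on the node would be the fix to try.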

hahahannes avatar Nov 21 '24 09:11 hahahannes

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

github-actions[bot] avatar Feb 19 '25 10:02 github-actions[bot]

This issue has been automatically closed because it has not had recent activity. Please comment "/reopen" to reopen it.

github-actions[bot] avatar Mar 11 '25 10:03 github-actions[bot]

/reopen

Electronic-Waste avatar Mar 11 '25 10:03 Electronic-Waste

@Electronic-Waste: Reopened this issue.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

google-oss-prow[bot] avatar Mar 11 '25 10:03 google-oss-prow[bot]

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

github-actions[bot] avatar Jun 09 '25 15:06 github-actions[bot]

This issue has been automatically closed because it has not had recent activity. Please comment "/reopen" to reopen it.

github-actions[bot] avatar Jun 29 '25 20:06 github-actions[bot]