kube-scheduler-simulator
Custom Plugin Implementation deprecated?
Is it deprecated since the simulator got moved to its own directory?
Tried to implement nodenumber in the config as follows:
import (
	"sigs.k8s.io/kube-scheduler-simulator/simulator/docs/how-to-use-custom-plugins/nodenumber"
	...
)

func OutOfTreeScorePlugins() []v1beta2.Plugin {
	return []v1beta2.Plugin{
		// Note: add your score plugins here.
		{
			Name: nodenumber.Name,
		},
	}
}

...

func OutOfTreeRegistries() runtime.Registry {
	return runtime.Registry{
		// Note: add your plugins registries here.
		nodenumber.Name: nodenumber.New,
	}
}
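For context, those two hooks only need the imported package to export a Name constant and a New constructor matching the scheduler framework's PluginFactory signature of that era. Below is a minimal sketch of that shape; the names mirror the documented nodenumber example (which favors nodes whose name ends in the same digit as the pod's), but the details are illustrative, not the actual file:

// Sketch of the shape OutOfTreeScorePlugins/OutOfTreeRegistries expect.
// The real example lives at simulator/docs/how-to-use-custom-plugins/nodenumber;
// the scoring logic here is illustrative only.
package nodenumber

import (
	"context"
	"errors"
	"strconv"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/kubernetes/pkg/scheduler/framework"
)

// Name is the plugin name referenced from the config above.
const Name = "NodeNumber"

// NodeNumber is a score plugin favoring nodes whose name ends in the same
// digit as the pod's name.
type NodeNumber struct{}

var _ framework.ScorePlugin = &NodeNumber{}

func (pl *NodeNumber) Name() string { return Name }

// Score awards a fixed bonus when the trailing digits of the pod name and
// node name match, and zero otherwise.
func (pl *NodeNumber) Score(ctx context.Context, state *framework.CycleState, pod *v1.Pod, nodeName string) (int64, *framework.Status) {
	podNum, err := lastDigit(pod.Name)
	if err != nil {
		return 0, nil // no trailing digit; stay neutral
	}
	nodeNum, err := lastDigit(nodeName)
	if err != nil {
		return 0, nil
	}
	if podNum == nodeNum {
		return 10, nil
	}
	return 0, nil
}

func (pl *NodeNumber) ScoreExtensions() framework.ScoreExtensions { return nil }

// New is the constructor registered in OutOfTreeRegistries; the signature
// matches the v1beta2-era framework runtime.PluginFactory.
func New(_ runtime.Object, _ framework.Handle) (framework.Plugin, error) {
	return &NodeNumber{}, nil
}

// lastDigit parses the trailing character of a name as an integer.
func lastDigit(s string) (int, error) {
	if s == "" {
		return 0, errors.New("empty name")
	}
	return strconv.Atoi(s[len(s)-1:])
}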
I get this error while building:
executor failed running [/bin/sh -c go build -v -o ./bin/simulator simulator.go]: exit code: 1 make: *** [docker_build_server] Error 1
and I didn't have this problem, at least with nodenumber, ~15 commits ago.
/kind bug
/assign
Let me check.
@JulianTS Could you show me the full error log?
How do I get this error log?
You got the error while building the simulator, so I think you can see what go build complains about by running make build
make build doesn't produce any output in my terminal.
Do I need to look into a specific dev log or activate a tool?
and all I get from make docker_build_and_up is:
...
#12 337.3 sigs.k8s.io/kube-scheduler-simulator/simulator/persistentvolume
#12 337.6 sigs.k8s.io/kube-scheduler-simulator/simulator/persistentvolumeclaim
#12 337.7 sigs.k8s.io/kube-scheduler-simulator/simulator/pod
#12 337.7 github.com/labstack/echo/v4/middleware
#12 337.9 sigs.k8s.io/kube-scheduler-simulator/simulator/priorityclass
#12 338.7 sigs.k8s.io/kube-scheduler-simulator/simulator/replicateexistingcluster
#12 338.7 sigs.k8s.io/kube-scheduler-simulator/simulator/reset
#12 338.7 k8s.io/client-go/dynamic/dynamiclister
#12 338.7 k8s.io/kubernetes/pkg/scheduler/internal/cache
#12 338.9 k8s.io/client-go/dynamic/dynamicinformer
#12 338.9 k8s.io/kubernetes/pkg/scheduler/internal/heap
#12 339.1 k8s.io/kubernetes/pkg/scheduler/profile
#12 339.1 k8s.io/kubernetes/pkg/scheduler/internal/queue
#12 339.3 sigs.k8s.io/kube-scheduler-simulator/simulator/scheduler/plugin/annotation
#12 339.3 sigs.k8s.io/kube-scheduler-simulator/simulator/scheduler/plugin/resultstore
#12 339.5 sigs.k8s.io/kube-scheduler-simulator/simulator/storageclass
#12 340.3 sigs.k8s.io/kube-scheduler-simulator/simulator/scheduler/plugin
#12 340.3 k8s.io/kubernetes/pkg/scheduler/internal/cache/debugger
#12 340.5 k8s.io/kubernetes/pkg/scheduler
#12 341.6 sigs.k8s.io/kube-scheduler-simulator/simulator/scheduler
#12 342.0 sigs.k8s.io/kube-scheduler-simulator/simulator/server/di
#12 342.4 sigs.k8s.io/kube-scheduler-simulator/simulator/server/handler
#12 342.6 sigs.k8s.io/kube-scheduler-simulator/simulator/server
------
executor failed running [/bin/sh -c go build -v -o ./bin/simulator simulator.go]: exit code: 1
make: *** [docker_build_server] Error 1
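For anyone hitting the same wall: the failing step is the literal go build -v -o ./bin/simulator simulator.go shown above, and the -v flag only lists packages as they compile, so the actual compile error is easy to miss in the buildkit output. Running that same go build command locally from the directory containing simulator.go (the simulator directory, going by the Dockerfile's working directory) should print the full compiler diagnostics.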
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.