
"go build ... signal: killed" while starting vcluster via devspace as explained in CONTRIBUTING.md

Open guettli opened this issue 3 years ago • 4 comments

What happened?

I installed vcluster in minikube as explained in CONTRIBUTING.md, then:

vcluster-0:vcluster-dev$ go run -mod vendor cmd/vcluster/main.go start

go build k8s.io/client-go/tools/events: /usr/local/go/pkg/tool/linux_amd64/compile: signal: killed
go build k8s.io/kubectl/pkg/cmd/util: /usr/local/go/pkg/tool/linux_amd64/compile: signal: killed
go build k8s.io/client-go/informers/core/v1: /usr/local/go/pkg/tool/linux_amd64/compile: signal: killed
go build k8s.io/kubectl/pkg/describe: /usr/local/go/pkg/tool/linux_amd64/compile: signal: killed
go build k8s.io/client-go/informers/rbac/v1: /usr/local/go/pkg/tool/linux_amd64/compile: signal: killed

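A `signal: killed` from the Go compiler means the process was terminated externally, typically by the kernel's OOM killer when the container runs up against its memory limit (this is confirmed further down the thread). A possible mitigation, sketched here and not verified against this exact setup, is to cap the build parallelism, which lowers the compiler's peak memory usage:

```shell
# Limit go to one concurrent compile: slower, but with a much lower
# peak memory footprint. GOFLAGS applies the flag to every subsequent
# go invocation in this shell session.
export GOFLAGS="-p=1"
go run -mod vendor cmd/vcluster/main.go start
```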
Here are the details of the installation process:

vcluster on  main via 🐹 v1.18.2 
❯ devspace run dev
[info]   Using namespace 'vcluster'
[info]   Using kube context 'minikube'
[done] √ Created namespace: vcluster
[info]   Rebuild image  because tag is missing                      
[info]   Building image 'ghcr.io/loft-sh/loft-enterprise/dev-vcluster:jGmvxHd' with engine 'buildkit'
[info]   Execute BuildKit command with: docker buildx build --tag ghcr.io/loft-sh/loft-enterprise/dev-vcluster:jGmvxHd --file Dockerfile --target builder -
[+] Building 0.8s (0/1)                                                                                                    
[+] Building 104.1s (18/18) FINISHED                                                                                       
 => [internal] load remote build context                                                                              0.8s
 => copy /context /                                                                                                   0.3s
 => [internal] load metadata for docker.io/library/golang:1.18                                                        1.8s
 => [builder  1/14] FROM docker.io/library/golang:1.18@sha256:1bbb02af44e5324a6eabe502b6a928d368977225c0255bc9aca4a  52.1s
 => => resolve docker.io/library/golang:1.18@sha256:1bbb02af44e5324a6eabe502b6a928d368977225c0255bc9aca4a734145f86e1  0.0s
 => => sha256:1bbb02af44e5324a6eabe502b6a928d368977225c0255bc9aca4a734145f86e1 2.35kB / 2.35kB                        0.0s
 => => sha256:d9d4b9b6e964657da49910b495173d6c4f0d9bc47b3b44273cf82fd32723d165 5.16MB / 5.16MB                        3.0s
 => => sha256:2068746827ec1b043b571e4788693eab7e9b2a95301176512791f8c317a2816a 10.88MB / 10.88MB                      7.8s
 => => sha256:a4081692fa3015104d84b125cbf623c566a0c9e2a39b9a29f6d939225d5687a4 1.80kB / 1.80kB                        0.0s
 => => sha256:2d952adaec1e94a7d3920f0f848dfd9037244f2491e499152918f30956c942b0 7.10kB / 7.10kB                        0.0s
 => => sha256:001c52e26ad57e3b25b439ee0052f6692e5c0f2d5d982a00a8819ace5e521452 55.00MB / 55.00MB                     22.8s
 => => sha256:9daef329d35093868ef75ac8b7c6eb407fa53abbcb3a264c218c2ec7bca716e6 54.58MB / 54.58MB                     21.0s
 => => sha256:1c28274a8e7c4c48dd6843a6c33a0192271cfc7ef94f059ef7d70c4b60da6702 85.90MB / 85.90MB                     41.2s
 => => sha256:6b3e8fc138ed24bdd28760005afdd15c34a7a0967fa0e1c3fe472bb3de29f2c6 141.86MB / 141.86MB                   49.9s
 => => sha256:f3efab82b0e6781916704657c07fcd27f758c051a79c51966805759586275493 156B / 156B                           23.0s
 => => extracting sha256:001c52e26ad57e3b25b439ee0052f6692e5c0f2d5d982a00a8819ace5e521452                             0.7s
 => => extracting sha256:d9d4b9b6e964657da49910b495173d6c4f0d9bc47b3b44273cf82fd32723d165                             0.1s
 => => extracting sha256:2068746827ec1b043b571e4788693eab7e9b2a95301176512791f8c317a2816a                             0.1s
 => => extracting sha256:9daef329d35093868ef75ac8b7c6eb407fa53abbcb3a264c218c2ec7bca716e6                             0.7s
 => => extracting sha256:1c28274a8e7c4c48dd6843a6c33a0192271cfc7ef94f059ef7d70c4b60da6702                             0.9s
 => => extracting sha256:6b3e8fc138ed24bdd28760005afdd15c34a7a0967fa0e1c3fe472bb3de29f2c6                             1.8s
 => => extracting sha256:f3efab82b0e6781916704657c07fcd27f758c051a79c51966805759586275493                             0.0s
 => [builder  2/14] WORKDIR /vcluster-dev                                                                             0.5s
 => [builder  3/14] RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.  7.1s
 => [builder  4/14] RUN curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-  3.4s
 => [builder  5/14] RUN if [ "amd64" = "amd64" ]; then go install github.com/go-delve/delve/cmd/dlv@latest; fi       10.1s 
 => [builder  6/14] COPY go.mod go.mod                                                                                0.0s 
 => [builder  7/14] COPY go.sum go.sum                                                                                0.0s 
 => [builder  8/14] COPY vendor/ vendor/                                                                              0.4s 
 => [builder  9/14] COPY cmd/vcluster cmd/vcluster                                                                    0.1s
 => [builder 10/14] COPY cmd/vclusterctl cmd/vclusterctl                                                              0.0s
 => [builder 11/14] COPY pkg/ pkg/                                                                                    0.0s
 => [builder 12/14] RUN ln -s "$(pwd)/manifests" /manifests                                                           0.3s
 => [builder 13/14] RUN mkdir -p /.cache /.config                                                                     0.4s
 => [builder 14/14] RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 GO111MODULE=on go build -mod vendor -o /vcluster cmd/  24.1s
 => exporting to image                                                                                                2.4s
 => => exporting layers                                                                                               2.4s
 => => writing image sha256:305793c986cdc5a9ea8542a90c886b8c661c7819c33e4c60ea7c7326eb9f06c5                          0.0s
 => => naming to ghcr.io/loft-sh/loft-enterprise/dev-vcluster:jGmvxHd                                                 0.0s
[done] √ Done processing image 'ghcr.io/loft-sh/loft-enterprise/dev-vcluster'
[info]   Execute 'helm upgrade vcluster --namespace vcluster --values /tmp/650786612 --install ./charts/k3s --kube-context minikube'
[done] √ Deployed helm chart (Release revision: 1)                   
[done] √ Successfully deployed vcluster with helm                    
                                             
#########################################################
[info]   DevSpace UI available at: http://localhost:8090
#########################################################

[0:sync] Waiting for containers to start...
[0:ports] Port-Forwarding: Waiting for containers to start...
[0:sync] Warning: Pod vcluster-0: 0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims. (FailedScheduling)
[0:sync] DevSpace is waiting, because Pod vcluster-0 has status: ContainerCreating
[0:sync] Starting sync...
[0:ports] Port forwarding started on 2346:2345 (vcluster/vcluster-0)
[0:sync] Sync started on /home/guettli/projects/vcluster <-> . (Pod: vcluster/vcluster-0)
[0:sync] Waiting for initial sync to complete
[info]   Terminal: Waiting for containers to start...
[info]   Opening shell to pod:container vcluster-0:syncer

   ____              ____
  |  _ \  _____   __/ ___| _ __   __ _  ___ ___
  | | | |/ _ \ \ / /\___ \| '_ \ / _` |/ __/ _ \
  | |_| |  __/\ V /  ___) | |_) | (_| | (_|  __/
  |____/ \___| \_/  |____/| .__/ \__,_|\___\___|
                          |_|

Welcome to your development container!
This is how you can work with it:
- Run `go run -mod vendor cmd/vcluster/main.go start` to start vcluster
- Run `devspace enter -n vcluster --pod vcluster-0 -c syncer` to create another shell into this container
- Run `kubectl ...` from within the container to access the vcluster if its started
- Files will be synchronized between your local machine and this container

If you wish to run vcluster in the debug mode with delve, run:
  `dlv debug ./cmd/vcluster/main.go --listen=0.0.0.0:2345 --api-version=2 --output /tmp/__debug_bin --headless --build-flags="-mod=vendor" -- start`
  Wait until the `API server listening at: [::]:2345` message appears
  Start the "Debug vcluster (localhost:2346)" configuration in VSCode to connect your debugger session.
  Note: vcluster won't start until you connect with the debugger.
  Note: vcluster will be stopped once you detach your debugger session.

TIP: hit an up arrow on your keyboard to find the commands mentioned above :) 

What did you expect to happen?

I expected vcluster to start.

How can we reproduce it (as minimally and precisely as possible)?

Use a fresh minikube, then run the commands as above.

Anything else we need to know?

No response

Host cluster Kubernetes version


 kubectl version

Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.3", 
GitCommit:"aef86a93758dc3cb2c658dd9657ab4ad4afc21cb", GitTreeState:"clean", BuildDate:"2022-07-14T02:31:37Z", GoVersion:"go1.18.3", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.4

Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.3", GitCommit:"816c97ab8cff8a1c72eccca1026f7820e93e0d25", GitTreeState:"clean", BuildDate:"2022-01-25T21:19:12Z", GoVersion:"go1.17.6", Compiler:"gc", Platform:"linux/amd64"}

Host cluster Kubernetes distribution

❯ minikube version

minikube version: v1.25.2
commit: 362d5fdc0a3dbee389b3d3f1034e8023e72bd3a7

vcluster version

fresh from git

Vcluster Kubernetes distribution (k3s (default), k8s, k0s)

default

OS and Arch

❯ cat /etc/os-release 

PRETTY_NAME="Ubuntu 21.10"
NAME="Ubuntu"
VERSION_ID="21.10"
VERSION="21.10 (Impish Indri)"
VERSION_CODENAME=impish
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=impish

guettli avatar Aug 05 '22 14:08 guettli

Strange: a few minutes later I ran the same command again, and it worked:

vcluster-0:vcluster-dev$ go run -mod vendor cmd/vcluster/main.go start
I0805 14:52:16.665238    7022 start.go:298] Start Plugins Manager...
I0805 14:52:16.666039    7022 plugin.go:164] Plugin server listening on localhost:10099
I0805 14:52:16.666249    7022 start.go:310] Using physical cluster at https://10.96.0.1:443
I0805 14:52:16.674024    7022 start.go:341] Can connect to virtual cluster with version v1.23.5+k3s1
...

Is there a reason why this failed the first time?

I'll leave this issue open; feedback is welcome.

guettli avatar Aug 05 '22 14:08 guettli

I just bumped into the same issue for the first time ever (in roughly 9 months of vcluster development with devspace). It was also resolved by re-running. No idea what is causing this.

$ go run -mod vendor cmd/vcluster/main.go start
go build k8s.io/kubectl/pkg/describe: /usr/local/go/pkg/tool/linux_amd64/compile: signal: killed

$ go run -mod vendor cmd/vcluster/main.go start
I0805 15:58:19.832714    6423 start.go:298] Start Plugins Manager...
...

matskiv avatar Aug 05 '22 16:08 matskiv

@matskiv Thank you, now I know that I am not alone. I use devspace version 5.x. Do you think it makes sense for someone new to devspace and vcluster to use devspace 6.x (which is currently in beta)?

guettli avatar Aug 06 '22 13:08 guettli

@guettli I don't think it matters whether you use devspace 5.x or 6.x at the moment. Currently, devspace.yaml uses a configuration version that is supported by both. I happen to be using devspace version 6.0.0-alpha.11. The 6.x beta is working well, and it will be promoted to 6.0.0 pretty soon.

matskiv avatar Aug 06 '22 21:08 matskiv

I had this happen with a different project, and it was caused by an OOM kill from the cgroup, because the pod had quite a low memory limit. The default memory limit for the syncer pod is 1Gi, so I doubt that is the problem here. Below is a command to check the cause if you are using minikube with the VM driver; for other setups, the steps will differ.

$ minikube ssh -- sudo dmesg | tail -n 50
...
[87042.185991] [ 161033]     0 161033   180088     3027   126976        0           985 compile
[87042.185992] Memory cgroup out of memory: Kill process 160878 (compile) score 1144 or sacrifice child
[87042.186049] Killed process 160878 (compile) total-vm:723280kB, anon-rss:39456kB, file-rss:12576kB, shmem-rss:0kB
[87042.188979] oom_reaper: reaped process 160878 (compile), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
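To rule memory limits in or out, one can also inspect the syncer container's limit directly. A sketch, assuming the namespace and pod names from this thread (`vcluster`, `vcluster-0`, container `syncer`):

```shell
# Print the memory limit of the "syncer" container in the dev pod.
kubectl -n vcluster get pod vcluster-0 \
  -o jsonpath='{.spec.containers[?(@.name=="syncer")].resources.limits.memory}'

# On minikube with a VM driver, search the kernel log for cgroup OOM kills
# like the ones quoted above:
minikube ssh -- sudo dmesg | grep -iE 'out of memory|oom' | tail -n 20
```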

matskiv avatar Aug 12 '22 19:08 matskiv