[kubic-bugs] [Bug 1166999] New: 'kubicctl init' fails with Error invoking kubeadm: exit status 1
http://bugzilla.opensuse.org/show_bug.cgi?id=1166999

            Bug ID: 1166999
           Summary: 'kubicctl init' fails with Error invoking kubeadm: exit status 1
    Classification: openSUSE
           Product: openSUSE Tumbleweed
           Version: Current
          Hardware: x86-64
                OS: SUSE Other
            Status: NEW
          Severity: Normal
          Priority: P5 - None
         Component: Kubic
          Assignee: kubic-bugs@opensuse.org
          Reporter: ladislav.mate@gmail.com
        QA Contact: qa-bugs@suse.de
          Found By: ---
           Blocker: ---
        User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:74.0) Gecko/20100101 Firefox/74.0
  Build Identifier:
From the logs, this is what I think is important before 'kubeadm reset --force' is issued:
Mar 18 15:13:19 kubeadm kubelet[16176]: E0318 15:13:19.466964   16176 manager.go:1086] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15de0f7e22fd1290312d477b9efcd91d.slice/crio-8297a0b68df97dc26d0f37d8bdcc50039ccb0e70bdbfc8004f729163969ada81.scope: Error finding container 8297a0b68df97dc26d0f37d8bdcc50039ccb0e70bdbfc8004f729163969ada81: Status 404 returned error &{%!s(*http.body=&{0xc0011f62a0 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x74f5f0) %!s(func() error=0x74f580)}
Mar 18 15:13:19 kubeadm kubelet[16176]: E0318 15:13:19.467558   16176 manager.go:1086] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71b345beeb10c733ebd61851f0a38f4c.slice/crio-c0aed94b84d47d6a41783b3a4187e5db0ba6bef851f13a4100a4ad298b878157.scope: Error finding container c0aed94b84d47d6a41783b3a4187e5db0ba6bef851f13a4100a4ad298b878157: Status 404 returned error &{%!s(*http.body=&{0xc0008076e0 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x74f5f0) %!s(func() error=0x74f580)}
Mar 18 15:13:19 kubeadm kubelet[16176]: E0318 15:13:19.468019   16176 manager.go:1086] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71b345beeb10c733ebd61851f0a38f4c.slice/crio-b46ce18b09597d91bf8cba339ea88c17700c23871864896eb50ee727192aa18a.scope: Error finding container b46ce18b09597d91bf8cba339ea88c17700c23871864896eb50ee727192aa18a: Status 404 returned error &{%!s(*http.body=&{0xc0011f6420 <nil> <nil> false false {0 0} false false false <nil>}) {%!s(int32=0) %!s(uint32=0)} %!s(bool=false) <nil> %!s(func(error) error=0x74f5f0) %!s(func() error=0x74f580)}
Mar 18 15:13:24 kubeadm kubelet[16176]: W0318 15:13:24.254444   16176 prober.go:108] No ref for container "cri-o://37ff09f5038f1c428bccf2015629ac7def3ef9c6c4f6c908dbe0dbd9a926d896" (etcd-kubeadm_kube-system(add521f89bbc3fc01ff194af48568a77):etcd)
Mar 18 15:13:24 kubeadm kubelet[16176]: I0318 15:13:24.254480   16176 prober.go:116] Liveness probe for "etcd-kubeadm_kube-system(add521f89bbc3fc01ff194af48568a77):etcd" failed (failure): HTTP probe failed with statuscode: 503
Mar 18 15:13:27 kubeadm kubelet[16176]: I0318 15:13:27.988365   16176 prober.go:116] Liveness probe for "kube-apiserver-kubeadm_kube-system(15de0f7e22fd1290312d477b9efcd91d):kube-apiserver" failed (failure): HTTP probe failed with statuscode: 500
Mar 18 15:13:36 kubeadm systemd[1]: crio-097e2c22c53ee0d065e67d5677ef71a6c6c1edfea90ad75f1fe48407be77cd5a.scope: Succeeded.
Mar 18 15:13:36 kubeadm systemd[1]: crio-097e2c22c53ee0d065e67d5677ef71a6c6c1edfea90ad75f1fe48407be77cd5a.scope: Consumed 1.397s CPU time.
Mar 18 15:13:37 kubeadm kubelet[16176]: I0318 15:13:37.638250   16176 prober.go:116] Liveness probe for "kube-apiserver-kubeadm_kube-system(15de0f7e22fd1290312d477b9efcd91d):kube-apiserver" failed (failure): HTTP probe failed with statuscode: 500
Mar 18 15:13:38 kubeadm systemd[1]: crio-conmon-097e2c22c53ee0d065e67d5677ef71a6c6c1edfea90ad75f1fe48407be77cd5a.scope: Succeeded.
Mar 18 15:13:38 kubeadm kubelet[16176]: I0318 15:13:38.812370   16176 kubelet.go:1951] SyncLoop (PLEG): "kube-controller-manager-kubeadm_kube-system(ca2a9edec1631e0fee63d05d4afc1576)", event: &pleg.PodLifecycleEvent{ID:"ca2a9edec1631e0fee63d05d4afc1576", Type:"ContainerDied", Data:"097e2c22c53ee0d065e67d5677ef71a6c6c1edfea90ad75f1fe48407be77cd5a"}
Mar 18 15:13:38 kubeadm kubelet[16176]: E0318 15:13:38.813675   16176 pod_workers.go:191] Error syncing pod ca2a9edec1631e0fee63d05d4afc1576 ("kube-controller-manager-kubeadm_kube-system(ca2a9edec1631e0fee63d05d4afc1576)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-kubeadm_kube-system(ca2a9edec1631e0fee63d05d4afc1576)"
Mar 18 15:13:44 kubeadm kubelet[16176]: W0318 15:13:44.140345   16176 conversion.go:110] Could not get instant cpu stats: cumulative stats decrease
Mar 18 15:13:46 kubeadm kubelet[16176]: I0318 15:13:46.787636   16176 kubelet.go:1984] SyncLoop (container unhealthy): "kube-controller-manager-kubeadm_kube-system(ca2a9edec1631e0fee63d05d4afc1576)"
Mar 18 15:13:46 kubeadm kubelet[16176]: E0318 15:13:46.788900   16176 pod_workers.go:191] Error syncing pod ca2a9edec1631e0fee63d05d4afc1576 ("kube-controller-manager-kubeadm_kube-system(ca2a9edec1631e0fee63d05d4afc1576)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-kubeadm_kube-system(ca2a9edec1631e0fee63d05d4afc1576)"
Mar 18 15:13:47 kubeadm kubelet[16176]: I0318 15:13:47.541420   16176 prober.go:116] Liveness probe for "kube-apiserver-kubeadm_kube-system(15de0f7e22fd1290312d477b9efcd91d):kube-apiserver" failed (failure): HTTP probe failed with statuscode: 500
Mar 18 15:13:51 kubeadm kubicd[1719]: time="2020-03-18T15:13:51Z" level=error msg="Error invoking kubeadm: exit status 1\nW0318 15:09:38.677827   15452 validation.go:28] Cannot validate kube-proxy config - no validator is available\nW0318 15:09:38.677907   15452 validation.go:28] Cannot validate kubelet config - no validator is available\nW0318 15:09:48.210563   15452 manifests.go:214] the default kube-apiserver authorization-mode is \"Node,RBAC\"; using \"Node,RBAC\"\nW0318 15:09:48.212596   15452 manifests.go:214] the default kube-apiserver authorization-mode is \"Node,RBAC\"; using \"Node,RBAC\"\nerror execution phase wait-control-plane: couldn't initialize a Kubernetes cluster\nTo see the stack trace of this error execute with --v=5 or higher\n"
Mar 18 15:13:51 kubeadm kubicd[1719]: time="2020-03-18T15:13:51Z" level=info msg="Executing /usr/bin/kubeadm: [kubeadm reset --force]"

During the init phase I can see that 4 containers are started; after some time they are restarted and in the end removed.

CONTAINER       IMAGE                                                              CREATED              STATE     NAME                      ATTEMPT   POD ID
a6b00704c77c6   c4dbbd93404efeff8a6e4786acbeacedb3d09b1bf853fad7c1ff9c3151059561   About a minute ago   Running   kube-controller-manager   0         290f739c0580e
d496e5ff6a1e6   9886f51d5876dfa7a2a53ad25651d982b03f1573d64060facd41a743db95e40d   About a minute ago   Running   kube-apiserver            0         59a02a8e7dd1b
a6c16a4754e7a   7a43dd7408a648c68d90c9439ef6e81a46202ae83e8405adbbc7029f69f40d1d   About a minute ago   Running   etcd                      0         85371e0c09e4e
e151c1729e999   b8aa11f465e53db4c21f519fed6d610d015aae2dae6266c4ed5f9cd427144255   About a minute ago   Running   kube-scheduler            0         a2e24ab789161
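To get more detail than kubicctl prints, it should be possible to watch the node directly while the init runs. This is only a sketch; the crictl name filter and re-running kubeadm by hand (after a 'kubeadm reset --force') are my assumptions, not something kubicctl documents:

  # list the control-plane containers kubeadm starts, including exited ones (crictl talks to CRI-O directly)
  crictl ps -a
  # print the logs of the container that keeps crash-looping, e.g. kube-controller-manager
  crictl logs $(crictl ps -a --name kube-controller-manager -q | head -n1)
  # re-run the failing phase by hand with more verbosity, as the kubeadm error message suggests
  kubeadm init --v=5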
Reproducible: Always

Steps to Reproduce:
1. Fresh install of Kubic (Master role) & transactional-update
2. kubeadm:~/.config/kubicctl # ls
   admin.crt  admin.key  Kubic-Control-CA.crt  user.crt  user.key
3. kubicctl init

Actual Results:
kubeadm:~/.config/kubicctl # kubicctl init
Initializing kubernetes master can take several minutes, please be patient.
Setting up single-master kubernetes node with weave
Initialize Kubernetes control-plane
Error invoking kubeadm: exit status 1
kubeadm:~/.config/kubicctl #

Expected Results:
kubicctl should finish properly
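For reference, the journal excerpts above can be gathered while reproducing the failure with something like the following (the unit names kubicd, kubelet and crio are my assumption for a default Kubic master install):

  # follow the relevant services in a second terminal while 'kubicctl init' runs
  journalctl -f -u kubicd -u kubelet -u crio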
http://bugzilla.opensuse.org/show_bug.cgi?id=1166999
http://bugzilla.opensuse.org/show_bug.cgi?id=1166999#c1

Ladislav Mate <ladislav.mate@gmail.com> changed:

           What    |Removed |Added
----------------------------------------------------------------------------
           Status  |NEW     |RESOLVED
        Resolution |---     |INVALID

--- Comment #1 from Ladislav Mate <ladislav.mate@gmail.com> ---
'kubeadm init' failed as well. After some more log checking I found that etcd complains about long response times:

etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:4" took too long (1.786507133s) to execute

I moved the VMs to faster disks and everything is working great.

kubeadm:~ # kubicctl init
Initializing kubernetes master can take several minutes, please be patient.
Setting up single-master kubernetes node with weave
Initialize Kubernetes control-plane
Deploy weave
Deploy Kubernetes Reboot Daemon (kured)
Kubernetes master was succesfully setup.
kubeadm:~ #
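In case someone wants to confirm the same root cause before moving to faster storage, the 'took too long' warnings can be pulled straight out of the etcd static pod, and the disk backing etcd can be benchmarked roughly along the lines of the etcd hardware guidance. This is only a sketch; the fio parameters are the commonly quoted fdatasync test values, and fio is not part of a default Kubic install:

  # look for slow-request warnings in the etcd container
  crictl logs $(crictl ps --name etcd -q | head -n1) 2>&1 | grep 'took too long'
  # rough fdatasync latency check on the disk backing /var/lib/etcd (requires the fio package)
  fio --rw=write --ioengine=sync --fdatasync=1 --directory=/var/lib/etcd --size=22m --bs=2300 --name=etcd-disk-check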