http://bugzilla.opensuse.org/show_bug.cgi?id=1093132
http://bugzilla.opensuse.org/show_bug.cgi?id=1093132#c12
--- Comment #12 from Panagiotis Georgiadis
@Panos: Can you try `kubeadm init --ignore-preflight-errors=cri` to ignore the "dockershim.sock running" error? I have no idea why the error occurs (as dockershim is running), but by ignoring it I get as far as on a CaaSP node (i.e., to the master-marking timeout).
Given that I have manually stopped kubelet beforehand (otherwise I run into the 'port is used' message), the CRI ERROR turns into a simple warning, so the process does not exit:
[WARNING CRI]: unable to check if the container runtime at "/var/run/dockershim.sock" is running: exit status 1
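For reference, this is the sequence I am using (a sketch of my setup, assuming /var/run/dockershim.sock is the CRI socket kubeadm picks up):

# stop kubelet first, otherwise the preflight checks complain about ports already being in use
systemctl stop kubelet
# ignore the CRI preflight check; the dockershim error is then reported only as the warning above
kubeadm init --ignore-preflight-errors=cri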
However, 'kubeadm reset' fails to remove the containers using 'crictl'; fortunately it falls back to docker, so the 'reset' functionality still works:
[reset] Cleaning up running containers using crictl with socket /var/run/dockershim.sock
[reset] Failed to stop the running containers using crictl. Trying using docker instead.
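If the docker fallback were ever unavailable, the leftover kubelet-managed containers could also be removed by hand, roughly like this (a sketch; the 'k8s_' name prefix dockershim gives its containers is an assumption about the naming convention):

# remove all containers created by the kubelet via dockershim (named k8s_<container>_<pod>_...)
docker ps -a -q --filter name=k8s_ | xargs -r docker rm -f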
The rest of the log:
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [ultron kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.160.5.162]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 18.002374 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node ultron as master by adding a label and a taint
error marking master: timed out waiting for the condition
Looking at the logs of kubelet:
hyperkube[11237]: I0515 10:44:20.757152 11237 prober.go:111] Liveness probe for "kube-apiserver-ultron_kube-system(2493d5406e150ab99897ddd64627789c):kube-apiserver" failed (failure): HTTP probe failed with statuscode: 500
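Since the liveness probe simply hits the apiserver's health endpoint, the same check can be done by hand while kubeadm is waiting, e.g. (a sketch; 6443 is the default secure port, adjust if your configuration differs):

# query the health endpoint the probe uses (self-signed cert, hence -k)
curl -k https://localhost:6443/healthz
# and look at the apiserver container logs via docker, since the CRI here is dockershim
docker logs --tail 50 $(docker ps -q --filter name=k8s_kube-apiserver | head -n 1)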
If you run the command with --dry-run=true, you can see what kubeadm tries to apply:
[markmaster] Will mark node ultron.suse.de as master by adding a label and a taint
[dryrun] Would perform action GET on resource "nodes" in API group "core/v1"
[dryrun] Resource name: "ultron.suse.de"
[dryrun] Would perform action PATCH on resource "nodes" in API group "core/v1"
[dryrun] Resource name: "ultron.suse.de"
[dryrun] Attached patch:
{"metadata":{"labels":{"node-role.kubernetes.io/master":""}},"spec":{"taints":[{"effect":"NoSchedule","key":"node-role.kubernetes.io/master"}]}}
[markmaster] Master ultron.suse.de tainted and labelled with key/value: node-role.kubernetes.io/master=""
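For what it is worth, the patch from the dry-run output above could also be applied by hand once the apiserver answers, e.g. (a sketch, assuming /etc/kubernetes/admin.conf is usable and the node name matches):

# apply the same label and taint that kubeadm's markmaster step would apply
kubectl --kubeconfig /etc/kubernetes/admin.conf patch node ultron.suse.de \
  -p '{"metadata":{"labels":{"node-role.kubernetes.io/master":""}},"spec":{"taints":[{"effect":"NoSchedule","key":"node-role.kubernetes.io/master"}]}}'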
Also look at:
[1] https://github.com/kubernetes/kops/issues/4390
[2] https://github.com/kubernetes/kubeadm/issues/584