Bug ID 1197490
Summary Kubic 20220322 fails to set up CNI with weave
Classification openSUSE
Product openSUSE Tumbleweed
Version Current
Hardware x86-64
OS openSUSE Tumbleweed
Status NEW
Severity Critical
Priority P5 - None
Component Kubic
Assignee kubic-bugs@opensuse.org
Reporter rombert@apache.org
QA Contact qa-bugs@suse.de
Found By ---
Blocker ---

I set up a group of libvirt VMs via
https://github.com/kubic-project/kubic-terraform-kvm and initialised the
control plane with kubeadm, following the guide at
https://en.opensuse.org/Kubic:kubeadm:


kubeadm init --kubernetes-version=v1.23.4 # due to bug #1197489
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f /usr/share/k8s-yaml/weave/weave.yaml

At this point the pod network should be available, but instead I get

$ kubectl get pod -A -o wide
NAMESPACE     NAME                              READY   STATUS             RESTARTS        AGE     IP            NODE      NOMINATED NODE   READINESS GATES
kube-system   coredns-fc8b57f45-kfsv6           0/1     CrashLoopBackOff   6 (88s ago)     5m37s   <none>        kubic-0   <none>           <none>
kube-system   coredns-fc8b57f45-nv4x9           0/1     CrashLoopBackOff   6 (81s ago)     5m37s   <none>        kubic-0   <none>           <none>
kube-system   etcd-kubic-0                      1/1     Running            0               5m34s   10.16.0.170   kubic-0   <none>           <none>
kube-system   kube-apiserver-kubic-0            1/1     Running            0               5m34s   10.16.0.170   kubic-0   <none>           <none>
kube-system   kube-controller-manager-kubic-0   1/1     Running            0               5m34s   10.16.0.170   kubic-0   <none>           <none>
kube-system   kube-proxy-hx8mx                  1/1     Running            0               5m38s   10.16.0.170   kubic-0   <none>           <none>
kube-system   kube-scheduler-kubic-0            1/1     Running            0               5m34s   10.16.0.170   kubic-0   <none>           <none>
kube-system   weave-net-2t4lz                   2/2     Running            1 (4m51s ago)   4m57s   10.16.0.170   kubic-0   <none>           <none>
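
Note that the CoreDNS pods never receive a pod IP, which suggests that no CNI
configuration has been installed on the node yet. If useful, this can be
checked on the node (assuming weave's usual config location under
/etc/cni/net.d) with:

$ ls /etc/cni/net.d/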

This is presumably due to an error loading the required kernel modules:

$ kubectl -n kube-system logs weave-net-2t4lz -c weave-init
modprobe: can't load module nfnetlink (kernel/net/netfilter/nfnetlink.ko.zst): invalid module format
Ignore the error if "xt_set" is built-in in the kernel
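
"invalid module format" can mean either that the module on disk was built for
a kernel other than the one currently running, or that the modprobe inside the
init container cannot handle the zstd-compressed .ko.zst file. A rough way to
narrow it down on the host (standard kmod tools; the vermagic field should
match the running kernel) would be:

$ uname -r
$ modinfo -F vermagic nfnetlink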

I loaded the modules required by weave on the host and restarted the weave-net daemonset:

$ modprobe nfnetlink
$ modprobe ip_set
$ kubectl -n kube-system rollout restart daemonset weave-net
daemonset.apps/weave-net restarted
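
As an aside, keeping these modules loaded across reboots should be possible
with a systemd modules-load.d drop-in on the node (the file name chosen here
is arbitrary):

$ printf 'nfnetlink\nip_set\n' | sudo tee /etc/modules-load.d/weave.conf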

This time weave-init produces no log output at all:

$ kubectl -n kube-system logs weave-net-jks7b -c weave-init

The weave status seems ok

$ kubectl -n kube-system exec weave-net-jks7b -c weave -- /home/weave/weave --local status

        Version: 2.8.1 (failed to check latest version - see logs; next check at 2022/03/24 23:46:26)

        Service: router
       Protocol: weave 1..2
           Name: be:f7:8c:d7:49:12(kubic-0)
     Encryption: disabled
  PeerDiscovery: enabled
        Targets: 0
    Connections: 0
          Peers: 1
 TrustedSubnets: none

        Service: ipam
         Status: ready
          Range: 10.32.0.0/12
  DefaultSubnet: 10.32.0.0/12
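
Even though the router and ipam report ready, the pods still do not receive
addresses, so it may also be worth confirming that the weave bridge exists on
the node (the interface is normally just called weave):

$ ip -d link show weave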

I restarted the CoreDNS deployment so that the pods would be recreated and get
a chance to be allocated IPs.

$ kubectl -n kube-system rollout restart deployment coredns
deployment.apps/coredns restarted

After the restart they enter CrashLoopBackOff again rather quickly:

kube-system   coredns-7d9d9d5f9f-7cz4w          0/1     CrashLoopBackOff   2 (2s ago)    21s
kube-system   coredns-7d9d9d5f9f-2cgwd          0/1     CrashLoopBackOff   2 (1s ago)    22s
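
If more detail is needed, the CoreDNS container logs and pod events can be
collected with the usual kubectl commands (pod name taken from the output
above):

$ kubectl -n kube-system logs coredns-7d9d9d5f9f-7cz4w --previous
$ kubectl -n kube-system describe pod coredns-7d9d9d5f9f-7cz4w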

