I noticed that one of the nodes in my Kubic cluster is limping; the culprit is a failing kube-proxy pod:
kube-system   kube-proxy-p4ljp   0/1   ImagePullBackOff   0   97d   10.25.0.43   kubic-worker-1   <none>   <none>
The root cause seems to be:
Normal   BackOff   19m (x11214 over 42h)   kubelet   Back-off pulling image "registry.opensuse.org/kubic/kube-proxy:v1.19.4"
The other nodes seem to work fine, and I assume they still have the image in the kubelet's local cache.
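For what it's worth, the kubelet publishes its cached images in the Node object's status, so one way to check that assumption is to query a healthy worker (node name taken from my cluster; adjust as needed):

```shell
# List the kube-proxy images the kubelet on a healthy worker has cached;
# the kubelet reports them in .status.images of the Node object.
kubectl get node kubic-worker-2 \
  -o jsonpath='{range .status.images[*]}{.names}{"\n"}{end}' \
  | grep kube-proxy
```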
All nodes are at version 1.19.7:
$ kubectl get node
NAME             STATUS   ROLES    AGE   VERSION
kubic-master-1   Ready    master   97d   v1.19.7
kubic-worker-1   Ready    <none>   97d   v1.19.7
kubic-worker-2   Ready    <none>   97d   v1.19.7
I could edit the DaemonSet manually, but I like Kubic's unattended updates too much, and I would like to understand the contract for version updates: the nodes were upgraded to v1.19.7, yet the kube-proxy DaemonSet still references the v1.19.4 image, which has apparently been removed from the registry.
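For completeness, the manual workaround I am trying to avoid would look roughly like this (the v1.19.7 tag is my guess at the matching image, based on the node version):

```shell
# Point the kube-proxy DaemonSet at an image tag matching the node
# version; the DaemonSet controller then rolls out replacement pods.
kubectl -n kube-system set image daemonset/kube-proxy \
  kube-proxy=registry.opensuse.org/kubic/kube-proxy:v1.19.7

# Watch the rollout replace the broken pod.
kubectl -n kube-system rollout status daemonset/kube-proxy
```

But if the unattended update machinery is supposed to keep the DaemonSet image in sync with the node version, I would rather not fight it with manual edits.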
Should I file a bug or is this expected behaviour?