Hi all,

On 22.03.22 at 12:42 Attila Pinter wrote:
Ran into a bit of a pickle with Kubic yesterday after the last update. The cluster had been running 1.23.4 since January without issues, but after an update I started to notice that coredns and kured were crashlooping[1]. Rolling back did not solve the issue. Judging by the log that I also made available[2], the pods seemingly can't get an address for some reason.
Anyone using k3s on openSUSE MicroOS? I noticed problems with coredns the day before yesterday on some of my single-node k3s clusters. Two x86 machines and a Raspi4 are showing lots of pods in CrashLoopBackOff, while another single-node x86 machine and my 3-node x86 cluster are running fine. All are on the same kernel version (5.16.15-1-default) and k3s version (1.22.6) with Cilium 1.11.2. Errors from the coredns pods indicate that it can no longer talk to outside DNS servers due to timeouts. The hosts themselves are working fine as far as I can see, no errors. [...]
CoreDNS-1.8.6 linux/arm64, go1.17.1, 13a9191
[ERROR] plugin/errors: 2 2710547001195683759.7881225443334916626. HINFO: read udp 10.0.0.190:35403->192.168.99.1:53: i/o timeout
[ERROR] plugin/errors: 2 2710547001195683759.7881225443334916626. HINFO: read udp 10.0.0.190:35711->192.168.99.121:53: i/o timeout
[...]
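For what it's worth, the "read udp ...:53: i/o timeout" errors above are just plain UDP DNS queries to the upstream servers going unanswered. A minimal sketch of that kind of probe, which can be run from a host or inside an affected pod to check whether the upstream (192.168.99.1 from the log above) is reachable at all, might look like this. The function names are my own, and this is only an illustration of the failure mode, not how CoreDNS itself is implemented:

```python
import socket
import struct

def build_dns_query(name, qtype=1):
    """Build a minimal DNS query packet (QTYPE 1 = A record, QCLASS = IN)."""
    # Header: ID, flags (RD bit set), QDCOUNT=1, AN/NS/ARCOUNT=0
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    # Question: length-prefixed labels, terminated by a zero byte
    question = b"".join(
        bytes([len(label)]) + label.encode() for label in name.split(".")
    ) + b"\x00"
    question += struct.pack(">HH", qtype, 1)
    return header + question

def probe_upstream(server, name="example.com", timeout=2.0):
    """Send one UDP query to `server`:53; False mirrors CoreDNS's 'i/o timeout'."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(build_dns_query(name), (server, 53))
        sock.recvfrom(512)  # any reply at all means the upstream is reachable
        return True
    except socket.timeout:
        return False
    finally:
        sock.close()
```

If `probe_upstream("192.168.99.1")` succeeds from the host but fails from inside a pod, that would point at the CNI/pod networking rather than the upstream DNS server itself.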
As far as I understood, Attila's and Robert's problem (Kubic 20220320, pods in CrashLoopBackOff) was caused by Weave not working properly, so I do not think this is related. However, I wanted to report it here in case anyone is experiencing similar issues...

Kind Regards,
Johannes
--
Johannes Kastl
Linux Consultant & Trainer
Tel.: +49 (0) 151 2372 5802
Mail: kastl@b1-systems.de
B1 Systems GmbH
Osterfeldstraße 7 / 85088 Vohburg
http://www.b1-systems.de
GF: Ralph Dehner
Unternehmenssitz: Vohburg / AG: Ingolstadt, HRB 3537