Re: Kubic 20220320 pods CrashLoopBackOff
On Wed, Mar 23, Robert Munteanu wrote:
I have the same problem with kubic cluster:
$ kubectl -n kube-system logs weave-net-8mmmc -c weave-init
modprobe: can't load module nfnetlink (kernel/net/netfilter/nfnetlink.ko.zst): invalid module format
Ignore the error if "xt_set" is built-in in the kernel

Works fine for me (but it's not running kubernetes):

microos:~ # lsmod |grep nfnetlink
nfnetlink 20480 0
microos:~ # cat /etc/os-release
NAME="openSUSE MicroOS"
# VERSION="20220321"

Could it be that your disk is full or the nfnetlink.ko.zst corrupted in any other way?
What does "rpm -V kernel-default" say?

Thorsten

--
Thorsten Kukuk, Distinguished Engineer, Senior Architect
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nuernberg, Germany
Managing Director: Ivo Totev (HRB 36809, AG Nürnberg)
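(For concreteness, the checks suggested above boil down to something like the following on a Kubic/MicroOS node; the mount points are assumptions based on the default disk layout:)

$ df -h / /var                 # free space on the root filesystem and on /var
$ sudo rpm -V kernel-default   # verify the installed kernel package against the RPM database
$ modinfo nfnetlink            # fails if the module file itself is unreadable or corrupt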
Hi Thorsten,

On Wed, 2022-03-23 at 11:42 +0100, Thorsten Kukuk wrote:
On Wed, Mar 23, Robert Munteanu wrote:
I have the same problem with kubic cluster:
$ kubectl -n kube-system logs weave-net-8mmmc -c weave-init
modprobe: can't load module nfnetlink (kernel/net/netfilter/nfnetlink.ko.zst): invalid module format
Ignore the error if "xt_set" is built-in in the kernel

Works fine for me (but it's not running kubernetes):

microos:~ # lsmod |grep nfnetlink
nfnetlink 20480 0
microos:~ # cat /etc/os-release
NAME="openSUSE MicroOS"
# VERSION="20220321"

Could it be that your disk is full or the nfnetlink.ko.zst corrupted in any other way?
What does "rpm -V kernel-default" say?
I already rolled back following Richard's ask to test older snapshots and the problem is (temporarily) solved.

FWIW, the old kernel version was 5.16.14 and the new one was 5.16.15.

I don't think free space is an issue, all my kubic VMs have at least 5 GB available on the root partition after a rollback and 8 GB on /var. I don't think the snapshot rollback freed up that much space.

Also, the problem affected all 3 VMs, no pods would get an IP address, so I'm not sure this was a disk space problem.

If you think this is useful, I will try and get another VM rolled forward to the latest kernel version and check the free space issue and the kernel RPM integrity.

Thanks,
Robert
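(For context, a rollback on Kubic/MicroOS is normally just the standard transactional-update flow; a sketch, with the snapshot number as a placeholder:)

$ snapper list                           # find the snapshot number from before the update
$ sudo transactional-update rollback 42  # 42 stands in for that snapshot number
$ sudo systemctl reboot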
On Wed, 2022-03-23 at 11:54 +0100, Robert Munteanu wrote:
Hi Thorsten,
On Wed, 2022-03-23 at 11:42 +0100, Thorsten Kukuk wrote:
On Wed, Mar 23, Robert Munteanu wrote:
I have the same problem with kubic cluster:
$ kubectl -n kube-system logs weave-net-8mmmc -c weave-init
modprobe: can't load module nfnetlink (kernel/net/netfilter/nfnetlink.ko.zst): invalid module format
Ignore the error if "xt_set" is built-in in the kernel

Works fine for me (but it's not running kubernetes):

microos:~ # lsmod |grep nfnetlink
nfnetlink 20480 0
microos:~ # cat /etc/os-release
NAME="openSUSE MicroOS"
# VERSION="20220321"

Could it be that your disk is full or the nfnetlink.ko.zst corrupted in any other way?
What does "rpm -V kernel-default" say?
I already rolled back following Richard's ask to test older snapshots and the problem is (temporarily) solved.
FWIW, the old kernel version was 5.16.14 and the new one was 5.16.15 .
I don't think free space is an issue, all my kubic VMs have at least 5 GB available on the root partition after a rollback and 8 GB on /var. I don't think the snapshot rollback freed up that much space.
Also, the problem affected all 3 VMs, no pods would get an IP address, so I'm not sure this was a disk space problem.
If you think this is useful, I will try and get another VM rolled forward to the latest kernel version and check the free space issue and the kernel RPM integrity.
Here's the update:

$ rpm -q kernel-default
kernel-default-5.16.15-1.1.x86_64
$ rpm -V kernel-default
(no errors, that is)
$ modprobe nfnetlink
$ lsmod | grep nfnetlink
nfnetlink 20480 0

The problem now is that, somehow, the rollback meant that the static manifests in /etc/kubernetes/manifests are pinned to v1.23.0, which are gone, so the API server is down:

Mar 24 08:32:10 kubic-master-1 kubelet[1224]: E0324 08:32:10.629167 1224 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.opensuse.org/kubic/kube-apiserver:v1.23.0\\\"\"" pod="kube-system/kube-apiserver-kubic-master-1" podUID=6498bfe4d6f53138be78d065788b23e4

I'm going to try and patch them in-place to point to 1.23.4, which is what I assume is available right now. However, container images disappearing from the registry really makes it hard to troubleshoot/rollback.

I would suggest that the Kubic project makes a statement on the installation pages regarding the availability of such container images.

Thanks,
Robert
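(For anyone else in the same situation, the in-place patch could be as simple as the sed below; the v1.23.4 tag is an assumption about what the registry currently serves, and the path is the default static manifest location:)

$ sudo sed -i 's|:v1.23.0|:v1.23.4|g' /etc/kubernetes/manifests/*.yaml
$ grep -h 'image:' /etc/kubernetes/manifests/*.yaml   # confirm the new tags; kubelet restarts the static pods on its own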
------- Original Message -------
On Thursday, March 24th, 2022 at 3:37 PM, Robert Munteanu <rombert@apache.org> wrote:
On Wed, 2022-03-23 at 11:54 +0100, Robert Munteanu wrote:
Hi Thorsten,
On Wed, 2022-03-23 at 11:42 +0100, Thorsten Kukuk wrote:
On Wed, Mar 23, Robert Munteanu wrote:
I have the same problem with kubic cluster:
$ kubectl -n kube-system logs weave-net-8mmmc -c weave-init
modprobe: can't load module nfnetlink
(kernel/net/netfilter/nfnetlink.ko.zst): invalid module format
Ignore the error if "xt_set" is built-in in the kernel
Works fine for me (but it's not running kubernetes):
microos:~ # lsmod |grep nfnetlink
nfnetlink 20480 0
microos:~ # cat /etc/os-release
NAME="openSUSE MicroOS"
# VERSION="20220321"
Could it be that your disk is full or the nfnetlink.ko.zst corrupted in any other way?
What does "rpm -V kernel-default" say?
I already rolled back following Richard's ask to test older snapshots
and the problem is (temporarily) solved.
FWIW, the old kernel version was 5.16.14 and the new one was 5.16.15.
I don't think free space is an issue, all my kubic VMs have at least 5 GB available on the root partition after a rollback and 8 GB on /var. I don't think the snapshot rollback freed up that much space.
Also, the problem affected all 3 VMs, no pods would get an IP address, so I'm not sure this was a disk space problem.
If you think this is useful, I will try and get another VM rolled forward to the latest kernel version and check the free space issue and the kernel RPM integrity.
I see no obvious issues on the latest Kubic image.

kubicmaster:~ # rpm -V kernel-default
kubicmaster:~ # uname -a
Linux kubicmaster 5.16.15-1-default #1 SMP PREEMPT Wed Mar 16 23:33:05 UTC 2022 (d8f0e40) x86_64 x86_64 x86_64 GNU/Linux
kubicmaster:~ # lsmod | grep -i nfnetlink
nfnetlink_log 20480 2
nfnetlink 20480 3 ip_set,nfnetlink_log
I would suggest that the Kubic project makes a statement on the
installation pages regarding the availability of such container images.
We've been there before, so to avoid issues we make a copy of the kube-system images at $dayjob; that registry now holds the 1.23.0 images: https://registry.antavo.com/kubic

Also, wondering if we should open a bug report on this issue instead of running the thread here?

--
Br,
A.
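(For anyone who wants to keep their own copy, a sketch of how such a copy can be made with skopeo; the image list and the destination registry below are placeholders, not an official mirror:)

$ for img in kube-apiserver kube-controller-manager kube-scheduler kube-proxy; do
      skopeo copy \
          docker://registry.opensuse.org/kubic/${img}:v1.23.0 \
          docker://registry.example.com/kubic/${img}:v1.23.0
  done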
On 24.03.22 at 11:59 Attila Pinter wrote:
On Thursday, March 24th, 2022 at 3:37 PM, Robert Munteanu <rombert@apache.org> wrote:
I would suggest that the Kubic project makes a statement on the installation pages regarding the availability of such container images.
We've been there before, so to avoid issues we make a copy of the kube-system images at $dayjob that now holds the 1.23.0 images: https://registry.antavo.com/kubic
Also, wondering if we should open a bug report on this issue instead of running the thread here?
I would propose to do that, otherwise it might get lost.

And to be honest, why can't those images be kept? Are they really using that much space?

Kind Regards,
Johannes

--
Johannes Kastl
Linux Consultant & Trainer
Tel.: +49 (0) 151 2372 5802
Mail: kastl@b1-systems.de

B1 Systems GmbH
Osterfeldstraße 7 / 85088 Vohburg
http://www.b1-systems.de
GF: Ralph Dehner
Unternehmenssitz: Vohburg / AG: Ingolstadt, HRB 3537
On Thu, 2022-03-24 at 15:13 +0100, Johannes Kastl wrote:
Also, wondering if we should open a bug report on this issue instead of running the thread here?
I would propose to do that, otherwise it might get lost.
I filed an issue for the main problem - Weave CNI not working:

https://bugzilla.opensuse.org/show_bug.cgi?id=1197490

It pretty much fails out of the box on a libvirt VM for me, following the kubeadm guide.

Hope this helps,
Robert
On Thu, 2022-03-24 at 15:13 +0100, Johannes Kastl wrote:
I would suggest that the Kubic project makes a statement on the installation pages regarding the availability of such container images.
We've been there before, so to avoid issues we make a copy of the kube-system images at $dayjob that now holds the 1.23.0 images: https://registry.antavo.com/kubic
Also, wondering if we should open a bug report on this issue instead of running the thread here?
I would propose to do that, otherwise it might get lost.
And to be honest, why can't those images be kept? Are they really using that much space?
And for the container images going away in particular:

https://bugzilla.opensuse.org/show_bug.cgi?id=1197495

Thanks,
Robert
------- Original Message -------
On Friday, March 25th, 2022 at 4:14 AM, Robert Munteanu <rombert@apache.org> wrote:
And for the container images going away in particular
Not sure if I would agree with this. Expecting the Kubic team to maintain multiple versions of k8s indefinitely is not something I would expect from anyone, thus I push the necessary images to our own registry to make sure I don't lose any. I do agree that we would need some sort of notice to users on en-o-o to raise attention to this matter, though.

I see the frustration in a situation as such, but if you need 1.23.0 - like we do right now - feel free to use the registry I shared earlier. Tags and names are all the same.

--
Br,
A.
On Fri, 2022-03-25 at 05:53 +0000, Attila Pinter wrote:
------- Original Message -------
On Friday, March 25th, 2022 at 4:14 AM, Robert Munteanu <rombert@apache.org> wrote:
And for the container images going away in particular
Not sure if I would agree with this. Expecting the Kubic team to maintain multiple versions of k8s indefinitely is not something I would expect from anyone thus I push the necessary images to our own registry to make sure I don't lose any. I do agree that we would need some sort of notice to users on en-o-o to raise attention to this matter though.
I see the frustration in a situation as such, but if you need 1.23.0 - like we do right now - feel free to use the registry I shared earlier. Tags and names are all the same.
My point of view was not about support. Simply having the container images available does not create an expectation that e.g. v1.23.0 is supported when v1.23.4 is available. It is about enabling rollbacks and giving breathing room for performing updates.

For me the most important action would be to see an official statement from the Kubic team posted on the installation pages that tells us how long we should expect the images to be available.

If they decide they will keep the same policy _and_ document it, that's a good outcome. Then I will know to mirror all of registry.opensuse.org/kubic and be done with it. If they decide to keep the images forever, that's even better for me, I will have nothing to do.

Thanks,
Robert
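(Mirroring all of registry.opensuse.org/kubic would presumably look something like the loop below; the repository names and the target registry are placeholders, and skopeo sync copies every tag of each repository:)

$ for repo in kube-apiserver kube-controller-manager kube-scheduler kube-proxy coredns pause; do
      skopeo sync --src docker --dest docker \
          registry.opensuse.org/kubic/${repo} \
          registry.example.com/kubic
  done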
On Thu, 2022-03-24 at 09:37 +0100, Robert Munteanu wrote:
On Wed, 2022-03-23 at 11:54 +0100, Robert Munteanu wrote:
Hi Thorsten,
On Wed, 2022-03-23 at 11:42 +0100, Thorsten Kukuk wrote:
On Wed, Mar 23, Robert Munteanu wrote:
I have the same problem with kubic cluster:
$ kubectl -n kube-system logs weave-net-8mmmc -c weave-init
modprobe: can't load module nfnetlink (kernel/net/netfilter/nfnetlink.ko.zst): invalid module format
Ignore the error if "xt_set" is built-in in the kernel

Works fine for me (but it's not running kubernetes):

microos:~ # lsmod |grep nfnetlink
nfnetlink 20480 0
microos:~ # cat /etc/os-release
NAME="openSUSE MicroOS"
# VERSION="20220321"

Could it be that your disk is full or the nfnetlink.ko.zst corrupted in any other way?
What does "rpm -V kernel-default" say?
I already rolled back following Richard's ask to test older snapshots and the problem is (temporarily) solved.
FWIW, the old kernel version was 5.16.14 and the new one was 5.16.15 .
I don't think free space is an issue, all my kubic VMs have at least 5 GB available on the root partition after a rollback and 8 GB on /var. I don't think the snapshot rollback freed up that much space.
Also, the problem affected all 3 VMs, no pods would get an IP address, so I'm not sure this was a disk space problem.
If you think this is useful, I will try and get another VM rolled forward to the latest kernel version and check the free space issue and the kernel RPM integrity.
Here's the update:
$ rpm -q kernel-default
kernel-default-5.16.15-1.1.x86_64
$ rpm -V kernel-default
(no errors, that is)
$ modprobe nfnetlink
$ lsmod | grep nfnetlink
nfnetlink 20480 0
The problem now is that, somehow, the rollback meant that the static manifests in /etc/kubernetes/manifests are pinned to v1.23.0, which are gone, so the API server is down:
Mar 24 08:32:10 kubic-master-1 kubelet[1224]: E0324 08:32:10.629167 1224 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.opensuse.org/kubic/kube-apiserver:v1.23.0\\\"\"" pod="kube-system/kube-apiserver-kubic-master-1" podUID=6498bfe4d6f53138be78d065788b23e4
I'm going to try and patch them in-place to point to 1.23.4, which is what I assume is available right now. However, container images disappearing from the registry really makes it hard to troubleshoot/rollback.
This gets even more fun. Once I manually modprobe nfnetlink, I see the following in the weave-init container logs:

modprobe: can't load module ip_set (kernel/net/netfilter/ipset/ip_set.ko.zst): invalid module format
Ignore the error if "xt_set" is built-in in the kernel

I resorted to manually loading the ip_set module as well, and after that the init container did not complain anymore. However, this still did not resolve the problem. Only 2/3 weave-net pods are up now:

$ kubectl get pod -l name=weave-net
NAME              READY   STATUS    RESTARTS   AGE
weave-net-6db9t   2/2     Running   0          117m
weave-net-96vnw   1/2     Running   0          118m
weave-net-xxrtp   2/2     Running   0          118m

The only thing that stands out for the not-ready weave pod is a large number of 'Vetoed installation of hairpin flow...' messages.

$ kubectl logs weave-net-6db9t -c weave | grep -c 'Vetoed installation of hairpin'
30
$ kubectl logs weave-net-96vnw -c weave | grep -c 'Vetoed installation of hairpin'
579
$ kubectl logs weave-net-xxrtp -c weave | grep -c 'Vetoed installation of hairpin'
6

If I try to get the weave status, all pods report something similar to:

$ kubectl exec weave-net-xxrtp -c weave -- /home/weave/weave --local status
        Version: 2.8.1 (failed to check latest version - see logs; next check at 2022/03/24 15:14:54)
        Service: router
       Protocol: weave 1..2
           Name: b2:4c:a5:c8:b2:89(kubic-master-1)
     Encryption: disabled
  PeerDiscovery: enabled
        Targets: 2
    Connections: 2 (2 established)
          Peers: 3 (with 6 established connections)
 TrustedSubnets: none
        Service: ipam
         Status: ready
          Range: 10.32.0.0/12
  DefaultSubnet: 10.32.0.0/12

Using kubectl --all-namespaces -o wide shows that weave has not allocated any IP addresses to any pods, only the ones that use the host network have them.

Another piece of information: running `watch ip addr` reveals a huge churn in IP addresses, I see a couple of virtual addresses coming and going for every refresh cycle.
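(To avoid repeating the manual modprobe after every reboot, the modules could be persisted on each node via systemd's modules-load mechanism; the file name below is arbitrary:)

$ printf 'nfnetlink\nip_set\n' | sudo tee /etc/modules-load.d/weave.conf
$ sudo systemctl restart systemd-modules-load.service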
I checked the systemd logs for a certain pod, but nothing stands out to me:

Mar 24 11:22:17 kubic-worker-2 crio[1216]: time="2022-03-24 11:22:17.555204679Z" level=info msg="Got pod network &{Name:prometheus-pushgateway-8655bf87b9-7s8z6 Namespace:lmn-system ID:d6a47c65d1bfe28aefe9f3b43c7653fff5546e14f67dfd409156a23b7e20e732 UID:94debd66-9a71-4822-8439-0d9cb6edc00f NetNS:/var/run/netns/0a17ec7c-6a80-4c87-ad07-474e65f1c1df Networks:[] RuntimeConfig:map[weave:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
Mar 24 11:22:17 kubic-worker-2 crio[1216]: time="2022-03-24 11:22:17.555636823Z" level=info msg="Checking pod lmn-system_prometheus-pushgateway-8655bf87b9-7s8z6 for CNI network weave (type=weave-net)"
Mar 24 11:22:17 kubic-worker-2 crio[1216]: time="2022-03-24 11:22:17.735153225Z" level=info msg="Ran pod sandbox d6a47c65d1bfe28aefe9f3b43c7653fff5546e14f67dfd409156a23b7e20e732 with infra container: lmn-system/prometheus-pushgateway-8655bf87b9-7s8z6/POD" id=366231ac-359a-4a98-abdd-b9bd968889d6 name=/runtime.v1.RuntimeService/RunPodSandbox
Mar 24 11:22:17 kubic-worker-2 kubelet[1235]: E0324 11:22:17.736658 1235 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-pushgateway\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=prometheus-pushgateway pod=prometheus-pushgateway-8655bf87b9-7s8z6_lmn-system(94debd66-9a71-4822-8439-0d9cb6edc00f)\"" pod="lmn-system/prometheus-pushgateway-8655bf87b9-7s8z6" podUID=94debd66-9a71-4822-8439-0d9cb6edc00f
Mar 24 11:22:18 kubic-worker-2 kubelet[1235]: I0324 11:22:18.473417 1235 kuberuntime_manager.go:517] "Sandbox for pod has no IP address. Need to start a new one" pod="lmn-system/prometheus-pushgateway-8655bf87b9-7s8z6"
Mar 24 11:22:18 kubic-worker-2 kubelet[1235]: I0324 11:22:18.473727 1235 kubelet.go:2101] "SyncLoop (PLEG): event for pod" pod="lmn-system/prometheus-pushgateway-8655bf87b9-7s8z6" event=&{ID:94debd66-9a71-4822-8439-0d9cb6edc00f Type:ContainerStarted Data:d6a47c65d1bfe28aefe9f3b43c7653fff5546e14f67dfd409156a23b7e20e732}
Mar 24 11:22:18 kubic-worker-2 crio[1216]: time="2022-03-24 11:22:18.505034449Z" level=info msg="Got pod network &{Name:prometheus-pushgateway-8655bf87b9-7s8z6 Namespace:lmn-system ID:d6a47c65d1bfe28aefe9f3b43c7653fff5546e14f67dfd409156a23b7e20e732 UID:94debd66-9a71-4822-8439-0d9cb6edc00f NetNS:/var/run/netns/0a17ec7c-6a80-4c87-ad07-474e65f1c1df Networks:[{Name:weave Ifname:eth0}] RuntimeConfig:map[weave:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
Mar 24 11:22:18 kubic-worker-2 crio[1216]: time="2022-03-24 11:22:18.505271666Z" level=info msg="Deleting pod lmn-system_prometheus-pushgateway-8655bf87b9-7s8z6 from CNI network \"weave\" (type=weave-net)"
Mar 24 11:22:25 kubic-worker-2 crio[1216]: time="2022-03-24 11:22:25.860145921Z" level=info msg="Running pod sandbox: lmn-system/prometheus-pushgateway-8655bf87b9-7s8z6/POD" id=c238805b-4797-4c57-ad72-a74caf002bfa name=/runtime.v1.RuntimeService/RunPodSandbox
Mar 24 11:22:26 kubic-worker-2 crio[1216]: time="2022-03-24 11:22:26.097940633Z" level=info msg="Got pod network &{Name:prometheus-pushgateway-8655bf87b9-7s8z6 Namespace:lmn-system ID:b5b362d92151bde559115c2dc5bb01bceef216434b77a4b7d6a7d72e6439e9a4 UID:94debd66-9a71-4822-8439-0d9cb6edc00f NetNS:/var/run/netns/374582ee-18ec-438b-a813-ff8ad4a44ea6 Networks:[] RuntimeConfig:map[weave:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
Mar 24 11:22:26 kubic-worker-2 crio[1216]: time="2022-03-24 11:22:26.098189993Z" level=info msg="Adding pod lmn-system_prometheus-pushgateway-8655bf87b9-7s8z6 to CNI network \"weave\" (type=weave-net)"
Mar 24 11:22:26 kubic-worker-2 kubelet[1235]: I0324 11:22:26.611351 1235 kubelet.go:2101] "SyncLoop (PLEG): event for pod" pod="lmn-system/prometheus-pushgateway-8655bf87b9-7s8z6" event=&{ID:94debd66-9a71-4822-8439-0d9cb6edc00f Type:ContainerDied Data:d6a47c65d1bfe28aefe9f3b43c7653fff5546e14f67dfd409156a23b7e20e732}
Mar 24 11:22:34 kubic-worker-2 crio[1216]: time="2022-03-24 11:22:34.585356194Z" level=info msg="Got pod network &{Name:prometheus-pushgateway-8655bf87b9-7s8z6 Namespace:lmn-system ID:b5b362d92151bde559115c2dc5bb01bceef216434b77a4b7d6a7d72e6439e9a4 UID:94debd66-9a71-4822-8439-0d9cb6edc00f NetNS:/var/run/netns/374582ee-18ec-438b-a813-ff8ad4a44ea6 Networks:[] RuntimeConfig:map[weave:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
Mar 24 11:22:34 kubic-worker-2 crio[1216]: time="2022-03-24 11:22:34.585713497Z" level=info msg="Checking pod lmn-system_prometheus-pushgateway-8655bf87b9-7s8z6 for CNI network weave (type=weave-net)"
Mar 24 11:22:34 kubic-worker-2 crio[1216]: time="2022-03-24 11:22:34.767330381Z" level=info msg="Ran pod sandbox b5b362d92151bde559115c2dc5bb01bceef216434b77a4b7d6a7d72e6439e9a4 with infra container: lmn-system/prometheus-pushgateway-8655bf87b9-7s8z6/POD" id=c238805b-4797-4c57-ad72-a74caf002bfa name=/runtime.v1.RuntimeService/RunPodSandbox
Mar 24 11:22:34 kubic-worker-2 kubelet[1235]: E0324 11:22:34.769600 1235 pod_workers.go:919] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-pushgateway\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=prometheus-pushgateway pod=prometheus-pushgateway-8655bf87b9-7s8z6_lmn-system(94debd66-9a71-4822-8439-0d9cb6edc00f)\"" pod="lmn-system/prometheus-pushgateway-8655bf87b9-7s8z6" podUID=94debd66-9a71-4822-8439-0d9cb6edc00f
Mar 24 11:22:35 kubic-worker-2 kubelet[1235]: I0324 11:22:35.744818 1235 kubelet.go:2101] "SyncLoop (PLEG): event for pod" pod="lmn-system/prometheus-pushgateway-8655bf87b9-7s8z6" event=&{ID:94debd66-9a71-4822-8439-0d9cb6edc00f Type:ContainerStarted Data:b5b362d92151bde559115c2dc5bb01bceef216434b77a4b7d6a7d72e6439e9a4}
Mar 24 11:22:35 kubic-worker-2 kubelet[1235]: I0324 11:22:35.745477 1235 kuberuntime_manager.go:517] "Sandbox for pod has no IP address. Need to start a new one" pod="lmn-system/prometheus-pushgateway-8655bf87b9-7s8z6"
Mar 24 11:22:35 kubic-worker-2 crio[1216]: time="2022-03-24 11:22:35.753802361Z" level=info msg="Got pod network &{Name:prometheus-pushgateway-8655bf87b9-7s8z6 Namespace:lmn-system ID:b5b362d92151bde559115c2dc5bb01bceef216434b77a4b7d6a7d72e6439e9a4 UID:94debd66-9a71-4822-8439-0d9cb6edc00f NetNS:/var/run/netns/374582ee-18ec-438b-a813-ff8ad4a44ea6 Networks:[{Name:weave Ifname:eth0}] RuntimeConfig:map[weave:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
Mar 24 11:22:35 kubic-worker-2 crio[1216]: time="2022-03-24 11:22:35.753897070Z" level=info msg="Deleting pod lmn-system_prometheus-pushgateway-8655bf87b9-7s8z6 from CNI network \"weave\" (type=weave-net)"

I have no idea how to go on right now, any ideas would be appreciated.

Thanks,
Robert
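(For reference, a log excerpt like the one above can be collected on the affected node with something along these lines; the time window and pod name are specific to this deployment:)

$ journalctl -u crio -u kubelet --since "2022-03-24 11:22" --until "2022-03-24 11:23" | grep prometheus-pushgateway-8655bf87b9-7s8z6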