On Wednesday, January 26th, 2022 at 5:13 PM, Richard Brown <rbrown@suse.de> wrote:
On Wed, 2022-01-26 at 09:24 +0000, Attila Pinter wrote:
On Tuesday, January 25th, 2022 at 6:41 PM, Attila Pinter
adathor@protonmail.com wrote:
On Tuesday, January 25th, 2022 at 5:41 PM, Thorsten Kukuk
kukuk@suse.de wrote:
On Tue, Jan 25, Attila Pinter wrote:
A follow-up question: Is it possible to downgrade nodes with
kubicctl? Got a pretty naughty situation on my hands:
dev-k8s-master-1 Ready control-plane,master 46d v1.22.4
dev-k8s-worker-01 Ready <none> 44m v1.23.0
dev-k8s-worker-02 Ready <none> 43m v1.23.0
dev-k8s-worker-03 Ready <none> 45d v1.22.4
dev-k8s-worker-04 Ready <none> 45d v1.22.4
dev-k8s-worker-05 Ready <none> 45d v1.22.4
How could I get the 2 worker nodes from 1.23 to 1.22.4?
Sorry, couldn't find relevant documentation on the subject.
I'm not sure how well k8s supports downgrades, but if you have to,
use kubeadm; kubicctl does not support this.
Better migrate everything to k8s 1.23.
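Roughly, the kubeadm-based path to 1.23 would be something like the
following (node name taken from your list above; the exact version
string and package handling on Kubic may differ, so treat it as a
sketch):

# control-plane node first
kubeadm upgrade plan
kubeadm upgrade apply v1.23.0

# then each worker: drain, upgrade the node, restart kubelet, uncordon
kubectl drain dev-k8s-worker-03 --ignore-daemonsets
kubeadm upgrade node        # run on the worker itself
systemctl restart kubelet
kubectl uncordon dev-k8s-worker-03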
Thorsten
Thorsten Kukuk, Distinguished Engineer, Senior Architect
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409
Nuernberg, Germany
Managing Director: Ivo Totev (HRB 36809, AG Nürnberg)
Sorry, just noticed that Proton messed up the title somehow...
Anyhow, yes, this is a less than ideal state. I thought kubicctl could
get away with the upgrade. I checked the plan with kubeadm first,
which seemed OK, but when I try the upgrade it fails. Kubeadm to the
rescue for more verbose output, and I got something weird:
[ERROR ImagePull]: failed to pull image
registry.opensuse.org/kubic/coredns/coredns:v1.8.6: output:
time="2022-01-25T11:21:48Z" level=fatal msg="pulling image: rpc
error: code = Unknown desc = reading manifest v1.8.6 in
registry.opensuse.org/kubic/coredns/coredns: name unknown"
Which is of course wrong, but where it got that image from is beyond
me. A little more debugging and hopefully I will be able to stabilize it.
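As a sanity check, pulling the single-path image by hand with crictl
should presumably still work, since that URI does exist on the
registry (this only confirms the runtime and registry are fine, it is
not a fix):

crictl pull registry.opensuse.org/kubic/coredns:v1.8.6
crictl images | grep coredns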
Thank you again for the help!
Br,
A.
I'm stuck with this. When I list the images with kubeadm, the URI for
coredns is incorrect for some reason.
This is how it should look:
kubeadm config images list
registry.opensuse.org/kubic/kube-apiserver:v1.23.0
registry.opensuse.org/kubic/kube-controller-manager:v1.23.0
registry.opensuse.org/kubic/kube-scheduler:v1.23.0
registry.opensuse.org/kubic/kube-proxy:v1.23.0
registry.opensuse.org/kubic/pause:3.6
registry.opensuse.org/kubic/etcd:3.5.1-0
registry.opensuse.org/kubic/coredns:v1.8.6
This is how it actually looks:
kubeadm config images list
registry.opensuse.org/kubic/kube-apiserver:v1.23.0
registry.opensuse.org/kubic/kube-controller-manager:v1.23.0
registry.opensuse.org/kubic/kube-scheduler:v1.23.0
registry.opensuse.org/kubic/kube-proxy:v1.23.0
registry.opensuse.org/kubic/pause:3.6
registry.opensuse.org/kubic/etcd:3.5.1-0
registry.opensuse.org/kubic/coredns/coredns:v1.8.6 << This is not good :/
Is there a way to manually change the image registry for kubeadm?
Can't seem to find the relevant config file for it.
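The only cluster-side place I'm aware of is the ClusterConfiguration
stored in the kubeadm-config ConfigMap in kube-system, and kubeadm can
also be fed an explicit config file. Something along these lines might
force the path, though I haven't tested it against the patched Kubic
build:

# see what the cluster itself thinks the configuration is
kubectl -n kube-system get configmap kubeadm-config -o yaml

# spell the DNS image out explicitly and check what kubeadm would use
cat > kubeadm-images.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.23.0
imageRepository: registry.opensuse.org/kubic
dns:
  imageRepository: registry.opensuse.org/kubic
  imageTag: v1.8.6
EOF
kubeadm config images list --config kubeadm-images.yaml
kubeadm config images pull --config kubeadm-images.yaml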
Hi Attila,
Are you sure you're running OUR kubeadm binary?
We have a patch which corrects the behaviour you're reporting:
https://build.opensuse.org/package/view_file/devel:kubic/kubernetes1.23/reve...
This patch appears to be applied correctly to our kubernetes1.23 binaries,
so either you must be running something else or we've stumbled on a
very weird bug where kubeadm isn't using its own function for locating
its own coredns container...
Any light you can shine on it would be great.
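Checking which package actually owns the binary on one of the affected
nodes should settle that quickly, e.g. something like:

command -v kubeadm
rpm -qf "$(command -v kubeadm)"
zypper se -si kubeadm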
Hi Richard,

Ok, so a little back story: I put this Kubic cluster together when kubicctl still had a little issue, but the fix was already in Devel:Kubic, so I added that repo. That was about 45 days ago and I naturally forgot about it. I noticed that the control plane hadn't been able to update automatically with t-u for a while, so I switched back to the official repo yesterday and did a dup, which seemingly fixed everything. I also added another 2 workers, which landed with 1.23 in the 1.22 cluster, which is fine. Following Thorsten's advice I would rather move forward than backward, but now I get this issue. I ran a grep -Ri coredns / the whole night but came up with nothing to indicate that one of the local configs is causing this. Or I just missed something, which is also absolutely possible. Anyhow, I just did a zypper pa after reading your mail and I think something is still not sitting right:

i | devel:kubic (openSUSE_Tumbleweed) | cri-o-kubeadm-criconfig        | 1.22.0-1.28     | x86_64
v | openSUSE-Tumbleweed-Oss           | cri-o-kubeadm-criconfig        | 1.22.0-1.4      | x86_64
  | openSUSE-Tumbleweed-Oss           | docker-kubic-kubeadm-criconfig | 20.10.12_ce-2.1 | x86_64
i | devel:kubic (openSUSE_Tumbleweed) | kubernetes-kubeadm             | 1.23.0-32.1     | x86_64
v | openSUSE-Tumbleweed-Oss           | kubernetes-kubeadm             | 1.23.0-22.2     | x86_64
  | devel:kubic (openSUSE_Tumbleweed) | kubernetes1.18-kubeadm         | 1.18.20-61.38   | x86_64
  | openSUSE-Tumbleweed-Oss           | kubernetes1.18-kubeadm         | 1.18.20-1.4     | x86_64
  | devel:kubic (openSUSE_Tumbleweed) | kubernetes1.19-kubeadm         | 1.19.15-38.16   | x86_64
  | openSUSE-Tumbleweed-Oss           | kubernetes1.19-kubeadm         | 1.19.15-2.4     | x86_64
  | devel:kubic (openSUSE_Tumbleweed) | kubernetes1.20-kubeadm         | 1.20.13-31.1    | x86_64
  | openSUSE-Tumbleweed-Oss           | kubernetes1.20-kubeadm         | 1.20.13-1.3     | x86_64
  | devel:kubic (openSUSE_Tumbleweed) | kubernetes1.21-kubeadm         | 1.21.7-34.1     | x86_64
  | openSUSE-Tumbleweed-Oss           | kubernetes1.21-kubeadm         | 1.21.7-1.3      | x86_64
  | devel:kubic (openSUSE_Tumbleweed) | kubernetes1.22-kubeadm         | 1.22.4-9.1      | x86_64
  | openSUSE-Tumbleweed-Oss           | kubernetes1.22-kubeadm         | 1.22.4-2.3      | x86_64
i | devel:kubic (openSUSE_Tumbleweed) | kubernetes1.23-kubeadm         | 1.23.0-5.1      | x86_64
v | openSUSE-Tumbleweed-Oss           | kubernetes1.23-kubeadm         | 1.23.0-2.1      | x86_64
i | devel:kubic (openSUSE_Tumbleweed) | patterns-containers-kubeadm    | 5.0-77.10       | x86_64
v | openSUSE-Tumbleweed-Oss           | patterns-containers-kubeadm    | 5.0-25.2        | x86_64

Will get to fixing this and let's see how it goes. Do let me know if I can provide any further information.

--
Br,
A.
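P.S. My rough plan for moving the nodes back onto the Oss packages, since these are transactional systems, is something along these lines (untested; the repo alias is the one from the zypper pa output above, and the devel:kubic alias will be whatever it is called locally):

transactional-update shell
# inside the new snapshot:
zypper mr -d <devel:kubic-alias>
zypper dup --from openSUSE-Tumbleweed-Oss --allow-vendor-change
exit
systemctl reboot   # activate the new snapshot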