On Tuesday, January 25th, 2022 at 5:41 PM, Thorsten Kukuk <firstname.lastname@example.org> wrote:
On Tue, Jan 25, Attila Pinter wrote:
A follow-up question: is it possible to downgrade nodes with kubicctl? I've got a pretty naughty situation on my hands:
NAME               STATUS  ROLES                 AGE  VERSION
dev-k8s-master-1   Ready   control-plane,master  46d  v1.22.4
dev-k8s-worker-01  Ready   <none>                44m  v1.23.0
dev-k8s-worker-02  Ready   <none>                43m  v1.23.0
dev-k8s-worker-03  Ready   <none>                45d  v1.22.4
dev-k8s-worker-04  Ready   <none>                45d  v1.22.4
dev-k8s-worker-05  Ready   <none>                45d  v1.22.4
How could I get the two worker nodes back from v1.23.0 to v1.22.4?
Sorry, couldn't find relevant documentation on the subject.
I'm not sure how well k8s supports downgrades, but even if you could do it
with kubeadm, kubicctl does not support this.
Better migrate everything to k8s 1.23.
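[For readers following along: the usual per-node kubeadm flow for moving the remaining workers to 1.23 looks roughly like the sketch below. The node name is taken from the listing above; the package-update step is distribution-specific, and on openSUSE Kubic it would normally go through kubicctl / transactional-update rather than a manual package install.]

```shell
# From a machine with cluster-admin access:
# cordon and drain the worker so its workloads are rescheduled elsewhere
kubectl drain dev-k8s-worker-03 --ignore-daemonsets --delete-emptydir-data

# On the worker node itself: let kubeadm refresh the local kubelet config
kubeadm upgrade node

# Still on the worker: install the matching v1.23 kubelet package
# (on Kubic, via the transactional update tooling), then restart it
systemctl restart kubelet

# Back on the admin machine: allow scheduling on the node again
kubectl uncordon dev-k8s-worker-03
```

Repeating this one node at a time keeps the cluster serving traffic throughout the upgrade.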
Thorsten Kukuk, Distinguished Engineer, Senior Architect
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nuernberg, Germany
Managing Director: Ivo Totev (HRB 36809, AG Nürnberg)
Sorry, just noticed that Proton messed up the subject line somehow... Anyhow, yes, this is a less than ideal state. I thought kubicctl could get away with the upgrade: I checked the plan with kubeadm first and it seemed OK, but when I try the upgrade it fails. kubeadm to the rescue for more verbose output, and I got something weird:

[ERROR ImagePull]: failed to pull image registry.opensuse.org/kubic/coredns/coredns:v1.8.6: output: time="2022-01-25T11:21:48Z" level=fatal msg="pulling image: rpc error: code = Unknown desc = reading manifest v1.8.6 in registry.opensuse.org/kubic/coredns/coredns: name unknown"

Which is of course wrong, but where it got that image reference from is beyond me. A little more debugging and hopefully I will be able to stabilize it. Thank you again for the help!

--
Br,
A.
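[A note on where that reference likely comes from: kubeadm assembles the CoreDNS image name from the imageRepository (and an optional dns section) stored in the cluster's kubeadm-config ConfigMap, combined with the CoreDNS version baked into the kubeadm binary itself, so a v1.23 kubeadm will ask for coredns:v1.8.6 even if the configured registry only carries older tags. A way to inspect this (exact output depends on the cluster):]

```shell
# List every image the local kubeadm binary would pull, including the
# registry prefix it resolves from the stored cluster configuration
kubeadm config images list

# Inspect the stored ClusterConfiguration; look for imageRepository and
# an optional dns: {imageRepository, imageTag} override
kubectl -n kube-system get configmap kubeadm-config -o yaml
```

If the Kubic registry does not carry the v1.8.6 tag, pinning dns.imageTag in the ClusterConfiguration to a tag that does exist there is one possible workaround for the pull failure.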