On Tuesday, January 25th, 2022 at 2:19 PM, Thorsten Kukuk <kukuk@suse.de> wrote:

Hi,

On Tue, Jan 25, Attila Pinter wrote:

> Hi,
>
> Ran into an interesting "issue" (quotes, because I'm not sure what to expect, and it's entirely possible that I'm pulling an amateur-hour type deal) with Kubic.
>
> After starting my test cluster at home I noticed that the `kured` pods are in ImagePullBackOff, and checking registry.opensuse.org, the version requested in the manifest (1.6.1) is missing for amd64. My question:
>
> - updates of pods should happen via transactional-update (as it seems to be a package in /usr/share/k8s-yaml/ that holds these manifests)

transactional-update only updates packages, not Kubernetes workloads. It cannot do that, as it has no credentials and, even if it had, would not know anything about the cluster.

Besides, automatically updating a k8s cluster would be a really bad idea, since the tools have no clue about your workload and which changes it needs to work with the new k8s version.
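
To make the split concrete (a sketch, assuming a stock Kubic node; nothing in it is k8s-aware):

    # pull updated packages into a new snapshot; the running system is untouched
    transactional-update up
    # the new snapshot only becomes active after a reboot
    systemctl reboot

Anything living inside the cluster, like the kured daemonset, is out of its reach.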

> - or kubicctl should do this (I think kubicctl should only handle the k8s upgrades in the cluster with `salt`; I could be wrong)

kubicctl will upgrade your k8s cluster, but you have to plan, schedule and run it yourself. See above: it is not possible for these tools to do this automatically, due to the fast pace and big changes of new k8s versions. So plan to run kubicctl from time to time to upgrade your k8s cluster.
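
When you plan such a window, it boils down to something like this (a sketch; check `kubicctl --help` for the exact subcommands of your version):

    # on the control-plane node, during a maintenance window you scheduled
    kubicctl upgrade

Whether your workload is ready for the new k8s version is still your call.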

> - or is it the user's responsibility to keep the pods up to date in their preferred manner?
>
> I have the feeling that the last option is the winner, but I would feel a lot better if someone could clarify this.

Correct.
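
For the concrete kured case that means fixing the manifest yourself, e.g. pointing the daemonset at a tag that actually exists on the registry. A sketch only: the kube-system namespace, the container name "kured" and the image path are assumptions, so verify them against your manifest first.

    # see which image/tag the pods fail to pull
    kubectl -n kube-system describe daemonset kured | grep -i image
    # switch to a tag that is really published for amd64 (path/tag assumed)
    kubectl -n kube-system set image daemonset/kured \
        kured=registry.opensuse.org/kubic/kured:<existing-tag>
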
Thorsten

> --
> Br,
> A.

--
Thorsten Kukuk, Distinguished Engineer, Senior Architect
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nuernberg, Germany
Managing Director: Ivo Totev (HRB 36809, AG Nürnberg)

A followup question: is it possible to downgrade nodes with kubicctl? Got a pretty naughty situation on my hands:

dev-k8s-master-1    Ready   control-plane,master   46d   v1.22.4
dev-k8s-worker-01   Ready   <none>                 44m   v1.23.0
dev-k8s-worker-02   Ready   <none>                 43m   v1.23.0
dev-k8s-worker-03   Ready   <none>                 45d   v1.22.4
dev-k8s-worker-04   Ready   <none>                 45d   v1.22.4
dev-k8s-worker-05   Ready   <none>                 45d   v1.22.4

How could I get the two worker nodes from 1.23.0 back to 1.22.4? Sorry, I couldn't find relevant documentation on the subject.

--
Br,
A.
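
PS: Would a per-node snapshot rollback be the way, assuming the two workers picked up 1.23.0 through a normal package update? Roughly this, untested, and the snapshot number would need checking first:

    # move workloads off the affected worker
    kubectl drain dev-k8s-worker-01 --ignore-daemonsets
    # on the worker itself: find the last snapshot that still had 1.22.4
    snapper list
    # make that snapshot the default again and boot into it (number assumed)
    transactional-update rollback <snapshot-number>
    systemctl reboot
    # afterwards, let the node take workloads again
    kubectl uncordon dev-k8s-worker-01

Or is there a supported kubicctl path for this?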