Re: Kubic update / k8s_init_microsite role update
On Tuesday, January 25th, 2022 at 2:19 PM, Thorsten Kukuk <kukuk@suse.de> wrote:
Hi,
On Tue, Jan 25, Attila Pinter wrote:
Hi,
Ran into an interesting "issue" (quotes, because I'm not sure what to expect, and it's absolutely possible that I'm pulling an amateur-hour type deal) with Kubic.
After starting my test cluster at home I noticed that the `kured` pods are in ImagePullBackOff, and checking registry.opensuse.org, the version requested in the manifest (1.6.1) is missing for amd64. My question: - should updates of pods happen via transactional-update (as it seems to be a package in /usr/share/k8s-yaml/ that holds these manifests)
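For reference, the tags actually published for an image on registry.opensuse.org can be checked with skopeo; a minimal sketch, assuming the kured image lives under kubic/ as the manifest suggests:

skopeo list-tags docker://registry.opensuse.org/kubic/kured       # lists the tags that exist
skopeo inspect --raw docker://registry.opensuse.org/kubic/kured:1.6.1   # shows which architectures a tag provides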
transactional-update only updates packages, not Kubernetes workloads. It
cannot do that, as it has no credentials to do so, and even if it had, it
would know nothing about the cluster.
Besides, automatic updates of a k8s cluster would be a really bad idea,
since the tools have no clue about your workload and which changes it
needs to work with the new k8s version.
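Updating such a workload by hand remains possible, of course; a minimal sketch, assuming kured keeps its upstream DaemonSet name, namespace and container name (all unconfirmed here):

kubectl -n kube-system set image daemonset/kured kured=registry.opensuse.org/kubic/kured:<existing-tag>
# or re-apply the packaged manifests after a package update
# (directory per the /usr/share/k8s-yaml/ location mentioned above; exact layout may differ):
kubectl apply -f /usr/share/k8s-yaml/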
- or should kubicctl do this (I think kubicctl should only handle the k8s upgrades in the cluster with `salt`; I could be wrong)
kubicctl will upgrade your k8s cluster, but you have to plan, schedule
and do it yourself. See above: it's not possible for these tools to do it
automatically, given how fast k8s moves and how big the changes between
versions are. So plan to run kubicctl from time to time to upgrade your
k8s cluster.
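A minimal sketch of such a planned upgrade (kubeadm's plan step is standard; `kubicctl upgrade` is the command used later in this thread, and per the question above it reaches the nodes via salt):

kubeadm upgrade plan    # preview which versions an upgrade would move to
kubicctl upgrade        # then let kubicctl roll the cluster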
- or is it the user's responsibility to keep the pods up-to-date in their preferred manner?
Correct.
Have the feeling that the last option is the winner, but would feel a lot better if someone could clarify this.
--
Br,
A.
Thorsten
--
Thorsten Kukuk, Distinguished Engineer, Senior Architect
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nuernberg, Germany
Managing Director: Ivo Totev (HRB 36809, AG Nürnberg)
On Tuesday, January 25th, 2022 at 5:41 PM, Thorsten Kukuk <kukuk@suse.de> wrote:
On Tue, Jan 25, Attila Pinter wrote:
A followup question: Is it possible to downgrade nodes with kubicctl? Got a pretty naughty situation on my hands:
dev-k8s-master-1 Ready control-plane,master 46d v1.22.4
dev-k8s-worker-01 Ready <none> 44m v1.23.0
dev-k8s-worker-02 Ready <none> 43m v1.23.0
dev-k8s-worker-03 Ready <none> 45d v1.22.4
dev-k8s-worker-04 Ready <none> 45d v1.22.4
dev-k8s-worker-05 Ready <none> 45d v1.22.4
How could I get the 2 worker nodes from 1.23 to 1.22.4?
Sorry, couldn't find relevant documentation on the subject.
I'm not sure how well k8s supports downgrades, but you would have to use
kubeadm for that; kubicctl does not support it.
Better to migrate everything to k8s 1.23.
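If a rebuild were really unavoidable, the usual route is not an in-place downgrade but draining and rejoining the node at the target version; a hedged sketch, with node names taken from the listing above and join parameters left as placeholders:

kubectl drain dev-k8s-worker-01 --ignore-daemonsets --delete-emptydir-data
kubectl delete node dev-k8s-worker-01
# then on the worker itself:
kubeadm reset -f
# align the kubernetes1.22 packages, then rejoin:
kubeadm join <control-plane>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>

Migrating forward, as suggested, avoids all of this.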
Thorsten
--
Thorsten Kukuk, Distinguished Engineer, Senior Architect
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nuernberg, Germany
Managing Director: Ivo Totev (HRB 36809, AG Nürnberg)
On Tuesday, January 25th, 2022 at 6:41 PM, Attila Pinter <adathor@protonmail.com> wrote:
Sorry, just noticed that Proton messed up the title somehow... Anyhow, yes, this is a less than ideal state. I thought kubicctl could get away with the upgrade. Checked the plan with kubeadm first and it seemed OK, but when I try the update it fails. Kubeadm to the rescue for more verbose output, and I got something weird:
[ERROR ImagePull]: failed to pull image registry.opensuse.org/kubic/coredns/coredns:v1.8.6: output: time="2022-01-25T11:21:48Z" level=fatal msg="pulling image: rpc error: code = Unknown desc = reading manifest v1.8.6 in registry.opensuse.org/kubic/coredns/coredns: name unknown"
Which is of course wrong, but where it got that image path from is beyond me. A little more debugging and hopefully I will be able to stabilize it.
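One place worth checking is the ClusterConfiguration that kubeadm stores in-cluster, since upgrades read their image settings from there; a sketch using stock commands:

kubectl -n kube-system get configmap kubeadm-config -o yaml   # look at imageRepository and the dns section
kubeadm config images list                                    # what an upgrade would try to pull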
Thank you again for the help!
Br,
A.
On Wed, 2022-01-26 at 09:24 +0000, Attila Pinter wrote:
I'm stuck with this. When I list the images with kubeadm, the URI for coredns is incorrect for some reason.
This is how it should look:
kubeadm config images list
registry.opensuse.org/kubic/kube-apiserver:v1.23.0
registry.opensuse.org/kubic/kube-controller-manager:v1.23.0
registry.opensuse.org/kubic/kube-scheduler:v1.23.0
registry.opensuse.org/kubic/kube-proxy:v1.23.0
registry.opensuse.org/kubic/pause:3.6
registry.opensuse.org/kubic/etcd:3.5.1-0
registry.opensuse.org/kubic/coredns:v1.8.6
This is how it actually looks:
kubeadm config images list
registry.opensuse.org/kubic/kube-apiserver:v1.23.0
registry.opensuse.org/kubic/kube-controller-manager:v1.23.0
registry.opensuse.org/kubic/kube-scheduler:v1.23.0
registry.opensuse.org/kubic/kube-proxy:v1.23.0
registry.opensuse.org/kubic/pause:3.6
registry.opensuse.org/kubic/etcd:3.5.1-0
registry.opensuse.org/kubic/coredns/coredns:v1.8.6 << This is not good :/
Is there a way to manually change the image registry for kubeadm? Can't seem to find the relevant config file for it.
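The registry prefix itself is not kept in a local config file: kubeadm takes it from the imageRepository field of its ClusterConfiguration. A hedged sketch of changing it:

kubectl -n kube-system edit configmap kubeadm-config
# inside ClusterConfiguration, e.g.:
#   imageRepository: registry.opensuse.org/kubic
# note: the doubled coredns/coredns path is appended by kubeadm's own coredns
# image logic, so imageRepository alone may not fix that part.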
Hi Attila,
Are you sure you're running OUR kubeadm binary?
We have a patch which corrects the behaviour you're reporting:
https://build.opensuse.org/package/view_file/devel:kubic/kubernetes1.23/reve...
This patch appears to be correctly applied to our kubernetes1.23 binaries, so either you must be running something else, or we've stumbled on a very weird bug where kubeadm isn't using its own function for locating its own coredns container...
Any light you can shine on it would be great.
- Richard
--
Richard Brown
Linux Distribution Engineer - Future Technology Team
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, D-90409 Nuremberg, Germany
(HRB 36809, AG Nürnberg) Managing Director/Geschäftsführer: Ivo Totev
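A quick way to verify which kubeadm is actually being invoked, using stock openSUSE tooling:

rpm -qf $(which kubeadm)    # which package owns the binary on PATH
kubeadm version -o short    # the version string it reports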
On Wednesday, January 26th, 2022 at 8:14 PM, Attila Pinter <adathor@protonmail.com> wrote:
Hi Richard,
Ok, so a little back story: I put this Kubic cluster together when kubicctl still had a little issue; the fix was already in Devel:Kubic, so I added that repo. That was about 45 days ago, and I naturally forgot about it. I noticed that the control plane hadn't been updating automatically with t-u for a while, so I switched back to the official repo yesterday and did a dup, which seemingly fixed everything.
I also added another 2 workers, which landed with 1.23 in the 1.22 cluster, which is fine. Following Thorsten's advice I'd rather move forward than backward, but now I get this issue.
I ran a grep -Ri coredns / the whole night, but came up with nothing to indicate that one of the local configs is causing this. Or I just missed something, which is also absolutely possible.
Anyhow, I just did a zypper pa after reading your mail, and I think something is still not sitting right:
i | devel:kubic (openSUSE_Tumbleweed) | cri-o-kubeadm-criconfig | 1.22.0-1.28 | x86_64
v | openSUSE-Tumbleweed-Oss | cri-o-kubeadm-criconfig | 1.22.0-1.4 | x86_64
| openSUSE-Tumbleweed-Oss | docker-kubic-kubeadm-criconfig | 20.10.12_ce-2.1 | x86_64
i | devel:kubic (openSUSE_Tumbleweed) | kubernetes-kubeadm | 1.23.0-32.1 | x86_64
v | openSUSE-Tumbleweed-Oss | kubernetes-kubeadm | 1.23.0-22.2 | x86_64
| devel:kubic (openSUSE_Tumbleweed) | kubernetes1.18-kubeadm | 1.18.20-61.38 | x86_64
| openSUSE-Tumbleweed-Oss | kubernetes1.18-kubeadm | 1.18.20-1.4 | x86_64
| devel:kubic (openSUSE_Tumbleweed) | kubernetes1.19-kubeadm | 1.19.15-38.16 | x86_64
| openSUSE-Tumbleweed-Oss | kubernetes1.19-kubeadm | 1.19.15-2.4 | x86_64
| devel:kubic (openSUSE_Tumbleweed) | kubernetes1.20-kubeadm | 1.20.13-31.1 | x86_64
| openSUSE-Tumbleweed-Oss | kubernetes1.20-kubeadm | 1.20.13-1.3 | x86_64
| devel:kubic (openSUSE_Tumbleweed) | kubernetes1.21-kubeadm | 1.21.7-34.1 | x86_64
| openSUSE-Tumbleweed-Oss | kubernetes1.21-kubeadm | 1.21.7-1.3 | x86_64
| devel:kubic (openSUSE_Tumbleweed) | kubernetes1.22-kubeadm | 1.22.4-9.1 | x86_64
| openSUSE-Tumbleweed-Oss | kubernetes1.22-kubeadm | 1.22.4-2.3 | x86_64
i | devel:kubic (openSUSE_Tumbleweed) | kubernetes1.23-kubeadm | 1.23.0-5.1 | x86_64
v | openSUSE-Tumbleweed-Oss | kubernetes1.23-kubeadm | 1.23.0-2.1 | x86_64
i | devel:kubic (openSUSE_Tumbleweed) | patterns-containers-kubeadm | 5.0-77.10 | x86_64
v | openSUSE-Tumbleweed-Oss | patterns-containers-kubeadm | 5.0-25.2 | x86_64
Will get to fixing this; let's see how it goes. Do let me know if I can provide any further information.
Br,
A.
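For the record, a hedged sketch of the repo cleanup for this kind of vendor drift on Kubic (repo alias hypothetical; on Kubic the changes go through a transactional-update snapshot):

transactional-update shell          # open a shell in the new snapshot
zypper mr -d devel_kubic            # disable the devel repo (alias may differ)
zypper dup --allow-vendor-change    # move packages back to the Oss vendor
exit                                # then reboot into the snapshot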
Little update: disabled the Devel:Kubic repo, did a zypper dup, did the vendor change for about 40-50 packages, rebooted, and tada: kubicctl upgrade solved the rest... So yeah, I pulled an amateur hour. Thorsten, Richard, big thanks for the help and guidance, greatly appreciated. Lessons learnt.
--
Br,
A.
participants (3): Attila Pinter, Richard Brown, Thorsten Kukuk