Bug ID 1178868
Summary cilium-k8s-yaml Helm Chart does not deploy a working cilium installation
Classification openSUSE
Product openSUSE Tumbleweed
Version Current
Hardware x86-64
OS Other
Status NEW
Severity Major
Priority P5 - None
Component Kubic
Assignee kubic-bugs@opensuse.org
Reporter contact@ffreitas.io
QA Contact qa-bugs@suse.de
Found By ---
Blocker ---

Hi all,

I was using kubic with cilium just fine, with kubic-control built from source
(master branch). With the latest helm chart of cilium packaged in
cilium-k8s-yaml it does not work anymore. From a bit of searching, I think
there are issues with the cilium images on registry.opensuse.org.


# The problem

The latest commit of kubic-control deploys cilium using the helm chart
installed by the package cilium-k8s-yaml.
For a while this chart was not working (some default values were missing), so a
quickfix was made with custom helm values pointing to upstream images.
Recently the chart was updated to cilium version 1.8.5 and that fix is no
longer needed: the template now works without adding any values.
To deploy the chart without any custom values, we need to build kubic-control
from source and deploy it on the master node.
We then run the following:
```bash
echo "#We do not want values here" > /etc/kubic/helm/cilium.yaml
kubicctl init --pod-network=cilium
```

But when doing so, the deployed cilium does not work:
```bash
node-master:~ # kubectl get pods -A
NAMESPACE     NAME                                  READY   STATUS              RESTARTS   AGE
kube-system   cilium-7sbtc                          0/1     CrashLoopBackOff    10         30m
kube-system   cilium-operator-cfc74c99b-9wwhs       0/1     Pending             0          30m
kube-system   cilium-operator-cfc74c99b-qcqb2       0/1     ImagePullBackOff    0          30m
kube-system   coredns-76679c6978-dx2xc              0/1     ContainerCreating   0          30m
kube-system   coredns-76679c6978-pfp7c              0/1     ContainerCreating   0          30m
kube-system   etcd-node-master                      1/1     Running             0          30m
kube-system   kube-apiserver-node-master            1/1     Running             0          30m
kube-system   kube-controller-manager-node-master   1/1     Running             0          30m
kube-system   kube-proxy-bzhrf                      1/1     Running             0          30m
kube-system   kube-scheduler-node-master            1/1     Running             0          30m
kube-system   kured-j5dh8                           0/1     ContainerCreating   0          30m
```
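
The reason the operator pod cannot start is recorded in its events; a quick way
to confirm it (pod name taken from the listing above):
```bash
# Show the events of the stuck operator pod; the ImagePullBackOff reason and the
# exact image reference that failed to pull appear at the end of the output.
kubectl describe pod -n kube-system cilium-operator-cfc74c99b-qcqb2
```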

# cilium operator

The first problem comes from the cilium-operator. The image used in the chart
is registry.opensuse.org/kubic/cilium-operator-generic:1.8.5, but this image
does not exist. This comes from the chart template:
```bash
node-master:~ # grep gen /usr/share/k8s-helm/cilium/charts/operator/templates/deployment.yaml
        - cilium-operator-generic
        image: "{{ .Values.global.registry }}/{{ .Values.image }}-generic:{{ .Values.global.tag }}"
```
It can only be fixed properly by changing the chart template or by publishing
the image under that name in the registry.
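
As a temporary workaround on the cluster side (not a fix for the package), the
operator image can probably be pointed back at the upstream registry through
/etc/kubic/helm/cilium.yaml, the same values file used for the earlier
quickfix. The keys below mirror the template quoted above and the upstream
cilium 1.8 charts; whether the packaged chart nests the operator values exactly
like this is an assumption on my part:
```bash
# Hypothetical values override: make the operator template render
# quay.io/cilium/operator-generic:v1.8.5 instead of the missing
# registry.opensuse.org image. The key names are inferred, not verified
# against the packaged chart.
cat > /etc/kubic/helm/cilium.yaml <<'EOF'
global:
  registry: quay.io/cilium
  tag: v1.8.5
operator:
  image: operator
EOF
kubicctl init --pod-network=cilium
```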

# cilium

We still have the cilium pod in a CrashLoopBackOff. Looking at the logs, the
problem seems to come from the image stored in registry.opensuse.org:
```bash
node-master:~ # kubectl logs -n kube-system cilium-7sbtc | tail
level=info msg="  --tunnel='vxlan'" subsys=daemon
level=info msg="  --version='false'" subsys=daemon
level=info msg="  --write-cni-conf-when-ready=''" subsys=daemon
level=info msg="     _ _ _" subsys=daemon
level=info msg=" ___|_| |_|_ _ _____" subsys=daemon
level=info msg="|  _| | | | | |     |" subsys=daemon
level=info msg="|___|_|_|_|___|_|_|_|" subsys=daemon
level=info msg="Cilium 1.8.5  go version go1.13.15 linux/amd64" subsys=daemon
level=info msg="cilium-envoy  version:
6d0a55191baac475046d13e52ffe330f3a56a4ce/1.14.4/Modified/DEBUG/BoringSSL"
subsys=daemon
level=fatal msg="clang: NOT OK" error="Invalid character(s) found in patch
number \"0\\nTarget:\"" subsys=linux-datapath
```
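
For whoever triages this: the fatal message appears to be cilium failing to
parse the output of `clang --version` inside its own image, so inspecting that
output directly should show what it chokes on. A minimal check, assuming the
agent image the chart renders is registry.opensuse.org/kubic/cilium:1.8.5 (the
exact reference is my guess; take it from `kubectl describe pod` on the cilium
DaemonSet):
```bash
# Print the clang version string baked into the agent image. A version line
# that is not a plain "major.minor.patch" (or that runs straight into the
# following "Target:" line) would explain the "Invalid character(s) found in
# patch number" error above. The image reference is an assumption, not taken
# from the bug output.
podman run --rm --entrypoint /bin/sh registry.opensuse.org/kubic/cilium:1.8.5 \
    -c 'clang --version'
```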



Is someone already looking into this? Is there any way for me to help with
these issues?
Regards, 

Francisco

