Hello community,
here is the log from the commit of package kubernetes-salt for openSUSE:Factory checked in at 2019-06-01 09:49:29
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/kubernetes-salt (Old)
and /work/SRC/openSUSE:Factory/.kubernetes-salt.new.5148 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Package is "kubernetes-salt"
Sat Jun 1 09:49:29 2019 rev:38 rq:705961 version:4.0.0+git_r1024_6af85a7
Changes:
--------
--- /work/SRC/openSUSE:Factory/kubernetes-salt/kubernetes-salt.changes 2019-02-02 21:48:46.144005777 +0100
+++ /work/SRC/openSUSE:Factory/.kubernetes-salt.new.5148/kubernetes-salt.changes 2019-06-01 09:49:30.539324097 +0200
@@ -1,0 +2,333 @@
+Wed Feb 27 14:35:04 UTC 2019 - Containers Team
+
+- Commit bb22844 by Alvaro Saurin alvaro.saurin@gmail.com
+  Synchronize everything before starting an orchestration. Replace all the
+  `mine.get` calls with the more compact `get_with_expr` function.
+
+ bsc#1124784
+
+ Signed-off-by: Alvaro Saurin
+
+
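For context, the helper mentioned here hides scattered `mine.get` calls behind
one expression-based lookup. A minimal sketch of the pattern (the helper name
and the grain appear in the diffs further down; the body and the `tgt_type`
argument are assumptions):

    def get_with_expr(expr, grain='network.interfaces'):
        # one compound-expression mine lookup instead of many ad-hoc mine.get calls
        return __salt__['mine.get'](expr, grain, tgt_type='compound')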
+-------------------------------------------------------------------
+Fri Feb 22 15:50:20 UTC 2019 - Containers Team
+
+- Commit b0a79f7 by Nirmoy Das ndas@suse.de
+ cilium: add repo for cilium
+
+ Signed-off-by: Nirmoy Das
+
+
+-------------------------------------------------------------------
+Thu Feb 21 14:40:16 UTC 2019 - Containers Team
+
+- Commit 0fcce23 by Alvaro Saurin alvaro.saurin@gmail.com
+  When using file.managed, create a temporary file in /tmp instead of in the
+  same directory as the target file. This fixes some problems with
+  programs/daemons that could be monitoring that directory.
+
+ bsc#1123716
+
+ Signed-off-by: Alvaro Saurin
+
+
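The mechanism amounts to rendering into a temporary file somewhere else and
copying the result over the watched path in one step. A minimal sketch of the
idea, with illustrative names and paths (the real state, `caasp_file.managed`,
appears further down):

    import os
    import shutil
    import tempfile

    def write_via_tmp(target, data, work_dir='/tmp'):
        # render into a temp file outside the watched directory...
        fd, tmp = tempfile.mkstemp(dir=work_dir)
        try:
            with os.fdopen(fd, 'w') as f:
                f.write(data)
            # ...then copy it over the target in a single step
            shutil.copyfile(tmp, target)
        finally:
            os.remove(tmp)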
+-------------------------------------------------------------------
+Thu Feb 21 11:38:23 UTC 2019 - Containers Team
+
+- Commit e49af82 by Markos Chandras mchandras@suse.de
+ Jenkinsfile: Update repository information for jenkins-library
+
+
+-------------------------------------------------------------------
+Thu Feb 21 09:26:36 UTC 2019 - Containers Team
+
+- Commit 1e20516 by Florian Bergmann fbergmann@suse.de
+  Add a dummy state to avoid having an empty state in an orchestration
+
+ This is a workaround for https://github.com/saltstack/salt/issues/14553 when
+ upgrading crio 1.9 to 1.10.
+
+
+-------------------------------------------------------------------
+Wed Feb 20 17:48:50 UTC 2019 - Containers Team
+
+- Commit c67d8f9 by dmaiocchi dmaiocchi@suse.com
+ Improve states stability
+
+  - the caasp_etcd.healthy function can fail even if the etcd cluster is
+    healthy: adding a retry is a better solution for avoiding false failures
+    during orchestrations.
+
+  - add caasp_service for kube-apiserver.service; with this we check 10 times
+    in a row that the service is running
+    (having only service.running can cause false failures).
+
+  - fixed some indentation in states.
+
+
+-------------------------------------------------------------------
+Wed Feb 20 15:40:52 UTC 2019 - Containers Team
+
+- Commit 9c06818 by Florian Bergmann fbergmann@suse.de
+  Use iteritems from six for python2/3 compatibility.
+
+ Fixes bsc#1123497
+
+ Commit 1b21219 by Florian Bergmann fbergmann@suse.de
+ Fix python3 iteration over dictionary.
+
+  Python 3 prevents modifying a dictionary while it is being iterated over.
+
+  Instead of modifying the dictionary in place, a new one is constructed.
+
+ Fixes bsc#1123497
+
+
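Both fixes follow the standard compatibility pattern; a runnable sketch with
hypothetical data (the real code imports `iteritems` from `salt.ext.six`, as
the caasp_net.py diff further down shows):

    from salt.ext.six import iteritems  # plain `six` works the same outside Salt

    hosts = {'comment-0': ['# header'], '10.0.0.1': ['admin']}

    # Python 3 forbids deleting keys while iterating, so build a new dict instead
    hosts = {k: v for k, v in hosts.items() if not k.startswith('comment-')}

    for ip, aliases in iteritems(hosts):  # works on both Python 2 and 3
        print(ip, aliases)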
+-------------------------------------------------------------------
+Wed Feb 20 11:16:23 UTC 2019 - Containers Team
+
+- Commit 78435fc by Jordi Massaguer Pla jmassaguerpla@suse.de
+ use caasp v4 images from SUSE Registry
+
+
+-------------------------------------------------------------------
+Tue Feb 19 16:58:55 UTC 2019 - Containers Team
+
+- Commit b3b4568 by Markos Chandras mchandras@suse.de
+ Jenkinsfile: Switch to dynamic library fetching and drop branch
+
+ Instead of having the library hardcoded to Jenkins master, we can fetch it
+ dynamically. We also drop the usage of library branches since it does not
+ make sense to maintain such a thing in the CI. The master branch should be
+ able to handle both development and release branches.
+
+
+-------------------------------------------------------------------
+Tue Feb 19 16:34:10 UTC 2019 - Containers Team
+
+- Commit 4280cf4 by Maximilian Meister mmeister@suse.de
+ update critical pod configuration
+
+ https://kubernetes.io/docs/tasks/administer-cluster/guaranteed-scheduling-cr...
+
+ bsc#1122783
+
+ Signed-off-by: Maximilian Meister
+
+
+-------------------------------------------------------------------
+Tue Feb 19 08:59:02 UTC 2019 - Containers Team
+
+- Commit 32d6dbe by Maximilian Meister mmeister@suse.de
+ [bsc#1125095] deployment timeout not correctly configured
+
+  Instead of setting the timeout, we were only setting the retries, which
+  caused the timeout to be prolonged too much.
+
+ Signed-off-by: Maximilian Meister
+
+
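The arithmetic behind the problem: with only `retry: attempts` set, each
attempt can block for as long as the underlying kubectl request timeout (1m in
the macro further down), so the effective deadline multiplies instead of
staying near the intended value. With illustrative numbers:

    attempts, interval = 60, 1      # retry settings derived from a ~60s deadline
    per_attempt = 60                # kubectl --request-timeout=1m worst case
    print(attempts * (per_attempt + interval))  # 3660s instead of roughly 60s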
+-------------------------------------------------------------------
+Tue Feb 19 08:48:43 UTC 2019 - Containers Team
+
+- Commit a5d00a8 by Florian Bergmann fbergmann@suse.de
+ Force basename on the system certificate name to prevent path traversal
+
+
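The fix works because os.path.basename drops every directory component, so a
crafted certificate name cannot escape the trust anchors directory; for
example:

    import os.path

    # a malicious name like '../../etc/cron.d/evil' collapses to its last segment
    name = os.path.basename('../../etc/cron.d/evil')
    print(name)                                          # evil
    print('/etc/pki/trust/anchors/{}.crt'.format(name))  # stays inside anchors/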
+-------------------------------------------------------------------
+Tue Feb 19 08:16:45 UTC 2019 - Containers Team
+
+- Commit 4f75ad3 by Rafael Fernández López ereslibre@ereslibre.es
+ Make nodename appear first on the /etc/hosts file
+
+  Salt will pick the first name on the current default interface to determine
+  the hostname of the machine. Since we are sorting all the entries for each
+  machine, there is a high chance that a salt minion id will win the first
+  position, affecting certain grains that we use to determine the hostname of
+  the node.
+
+ Fixes: bsc#1117339
+
+
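The fix amounts to prepending the nodename entries without re-sorting them; a
toy version of the helper added in the caasp_hosts.py diff further down (list
contents are illustrative):

    def unsorted_prepend(lst1, lst2):
        # prepend lst2 to lst1, preserving order and dropping empty entries
        return [x for x in lst2 + lst1 if x]

    names = sorted(['zz-minion-id', 'api.infra.local'])   # alphabetical order
    print(unsorted_prepend(names, ['nodename', 'nodename.infra.local']))
    # ['nodename', 'nodename.infra.local', 'api.infra.local', 'zz-minion-id']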
+-------------------------------------------------------------------
+Tue Feb 19 08:05:23 UTC 2019 - Containers Team
+
+- Commit d0d4384 by Michal Jura mjura@suse.com
+ Enable kube-apiserver authentication to the kubelet (bsc#1121146)
+
+ Kube-apiserver should authenticate to the kubelet with a client certificate
+ and key. This is configured with the --kubelet-client-certificate and
+  --kubelet-client-key flags provided to the API server. Kubelet has to be
+  started with the --client-ca-file flag or the clientCAFile option in the
+  kubelet-config.yaml file, which provides a CA bundle to verify client
+  certificates with.
+
+ (cherry picked from commit 6309fb22ae122db6e2db2705fe47c1f4ae939ffb)
+
+ Commit 1b083a4 by Michal Jura mjura@suse.com
+ Disable anonymous access to Kubelet API (bsc#1121146)
+
+ (cherry picked from commit dd88fe82fa8a611db1593025b5c61818e7a61999)
+
+
+-------------------------------------------------------------------
+Mon Feb 18 12:15:26 UTC 2019 - Containers Team
+
+- Commit 42c129a by Panos Georgiadis drpaneas@gmail.com
+ Disable insecure port in kube-apiserver (bsc#1121148)
+
+ * Fixes bnc#1121148 - Critical Security issue for KubeAPI
+ Insecure API port exposed to all Master Node guest containers
+
+ In older versions of Kubernetes, you could run kube-apiserver
+ with an API port that does not have any protections around it.
+
+  This PR disables the insecure port by passing --insecure-port=0.
+
+  In recent versions, this has been disabled by default with the
+  intention of completely deprecating it.
+
+ (cherry picked from commit 01d91482e9a84b05b3b6eaec6a94b7b19ee74ee4)
+
+
+-------------------------------------------------------------------
+Thu Feb 14 16:29:12 UTC 2019 - Containers Team
+
+- Commit cb017ed by Alvaro Saurin alvaro.saurin@gmail.com
+ Use a writable directory for volume plugins Use the same volumes plugins
++++ 139 more lines (skipped)
++++ between /work/SRC/openSUSE:Factory/kubernetes-salt/kubernetes-salt.changes
++++ and /work/SRC/openSUSE:Factory/.kubernetes-salt.new.5148/kubernetes-salt.changes
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Other differences:
------------------
++++++ kubernetes-salt.spec ++++++
--- /var/tmp/diff_new_pack.kQXe08/_old 2019-06-01 09:49:31.067323917 +0200
+++ /var/tmp/diff_new_pack.kQXe08/_new 2019-06-01 09:49:31.071323916 +0200
@@ -21,8 +21,8 @@
%endif
%if 0%{?suse_version} >= 1500 && !0%{?is_opensuse}
- # Use the sles12 images from the registry
- %define _base_image registry.suse.de/devel/casp/3.0/controllernode/images_container_base/sles12
+ # Use the caasp images from the registry
+ %define _base_image registry.suse.com/caasp/v4
%endif
%if 0%{?is_opensuse} && 0%{?suse_version} > 1500
@@ -33,7 +33,7 @@
Name: kubernetes-salt
%define gitrepo salt
-Version: 4.0.0+git_r967_4dfc00f
+Version: 4.0.0+git_r1024_6af85a7
Release: 0
BuildArch: noarch
Summary: Production-Grade Container Scheduling and Management
++++++ master.tar.gz ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/salt-master/Jenkinsfile new/salt-master/Jenkinsfile
--- old/salt-master/Jenkinsfile 2019-01-28 16:51:16.000000000 +0100
+++ new/salt-master/Jenkinsfile 2019-02-27 15:33:22.000000000 +0100
@@ -1,5 +1,6 @@
-def targetBranch = env.getEnvironment().get('CHANGE_TARGET', env.BRANCH_NAME)
-
-library "kubic-jenkins-library@${targetBranch}"
+library identifier: "kubic-jenkins-library@master", retriever: modernSCM(
+ [$class: 'GitSCMSource',
+ remote: 'https://github.com/suse/caasp-jenkins-library.git',
+ credentialsId: 'github-token'])
coreKubicProjectCi()
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/salt-master/Jenkinsfile.flake8 new/salt-master/Jenkinsfile.flake8
--- old/salt-master/Jenkinsfile.flake8 2019-01-28 16:51:16.000000000 +0100
+++ new/salt-master/Jenkinsfile.flake8 2019-02-27 15:33:22.000000000 +0100
@@ -1,6 +1,7 @@
-def targetBranch = env.getEnvironment().get('CHANGE_TARGET', env.BRANCH_NAME)
-
-library "kubic-jenkins-library@${targetBranch}"
+library identifier: "kubic-jenkins-library@master", retriever: modernSCM(
+ [$class: 'GitSCMSource',
+ remote: 'https://github.com/suse/caasp-jenkins-library.git',
+ credentialsId: 'github-token'])
// TODO: Don't hardcode salt repo name, find the right place
// to lookup this information dynamically.
@@ -10,33 +11,17 @@
user: env.CHANGE_AUTHOR,
credentialsId: 'github-token')
-def label = "salt-flake8-${UUID.randomUUID().toString()}"
-
-podTemplate(label: label, containers: [
- containerTemplate(
- name: 'tox',
- image: 'registry.suse.de/devel/casp/ci/opensuse_leap_42.3_containers/jenkins-tox-container:latest',
- alwaysPullImage: true,
- ttyEnabled: true,
- command: 'cat',
- envVars: [
- envVar(key: 'http_proxy', value: env.http_proxy),
- envVar(key: 'https_proxy', value: env.http_proxy),
- ],
- ),
-]) {
- node(label) {
- stage('Retrieve Code') {
- checkout scm
- }
+node("leap15.0&&caasp-pr-worker") {
+ stage('Retrieve Code') {
+ checkout scm
+ }
+ docker.image('registry.suse.de/devel/casp/ci/opensuse_leap_42.3_containers/jenkins-tox-container:latest').inside('-v ${WORKSPACE}:/salt') {
stage('Style Checks') {
- container('tox') {
- try {
- sh 'tox -e flake8 -- --format=junit-xml --output-file junit.xml'
- } finally {
- junit "junit.xml"
- }
+ try {
+ sh(script: 'tox -e flake8 -- --format=junit-xml --output-file junit.xml')
+ } finally {
+ junit "junit.xml"
}
}
}
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/salt-master/Jenkinsfile.housekeeping new/salt-master/Jenkinsfile.housekeeping
--- old/salt-master/Jenkinsfile.housekeeping 2019-01-28 16:51:16.000000000 +0100
+++ new/salt-master/Jenkinsfile.housekeeping 2019-02-27 15:33:22.000000000 +0100
@@ -1,5 +1,6 @@
-def targetBranch = env.getEnvironment().get('CHANGE_TARGET', env.BRANCH_NAME)
-
-library "kubic-jenkins-library@${targetBranch}"
+library identifier: "kubic-jenkins-library@master", retriever: modernSCM(
+ [$class: 'GitSCMSource',
+ remote: 'https://github.com/suse/caasp-jenkins-library.git',
+ credentialsId: 'github-token'])
coreKubicProjectHousekeeping()
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/salt-master/Jenkinsfile.tests new/salt-master/Jenkinsfile.tests
--- old/salt-master/Jenkinsfile.tests 2019-01-28 16:51:16.000000000 +0100
+++ new/salt-master/Jenkinsfile.tests 2019-02-27 15:33:22.000000000 +0100
@@ -1,6 +1,7 @@
-def targetBranch = env.getEnvironment().get('CHANGE_TARGET', env.BRANCH_NAME)
-
-library "kubic-jenkins-library@${targetBranch}"
+library identifier: "kubic-jenkins-library@master", retriever: modernSCM(
+ [$class: 'GitSCMSource',
+ remote: 'https://github.com/suse/caasp-jenkins-library.git',
+ credentialsId: 'github-token'])
// TODO: Don't hardcode salt repo name, find the right place
// to lookup this information dynamically.
@@ -10,48 +11,19 @@
user: env.CHANGE_AUTHOR,
credentialsId: 'github-token')
-def label = "salt-tests-${UUID.randomUUID().toString()}"
-
-podTemplate(label: label, containers: [
- containerTemplate(
- name: 'tox',
- image: 'registry.suse.de/devel/casp/ci/opensuse_leap_42.3_containers/jenkins-tox-container:latest',
- alwaysPullImage: true,
- ttyEnabled: true,
- command: 'cat',
- envVars: [
- envVar(key: 'http_proxy', value: env.http_proxy),
- envVar(key: 'https_proxy', value: env.http_proxy),
- ],
- ),
- containerTemplate(
- name: 'tox3',
- image: 'registry.suse.de/devel/casp/ci/opensuse_leap_42.3_containers/jenkins-tox-container:latest',
- alwaysPullImage: true,
- ttyEnabled: true,
- command: 'cat',
- envVars: [
- envVar(key: 'http_proxy', value: env.http_proxy),
- envVar(key: 'https_proxy', value: env.http_proxy),
- ],
- ),
-]) {
- node(label) {
- stage('Retrieve Code') {
- checkout scm
- }
-
+node("leap15.0&&caasp-pr-worker") {
+ stage('Retrieve Code') {
+ checkout scm
+ }
+
+ docker.image('registry.suse.de/devel/casp/ci/opensuse_leap_42.3_containers/jenkins-tox-container:latest').inside('-v ${WORKSPACE}:/salt') {
stage('Create Test Virtualenv') {
parallel(
'Python 2.7': {
- container('tox') {
- sh 'tox --notest -e tests-salt-2018.3.0-py27'
- }
+ sh(script: 'tox --notest -e tests-salt-2018.3.0-py27')
},
'Python 3.4': {
- container('tox3') {
- sh 'tox --notest -e tests-salt-2018.3.0-py34'
- }
+ sh(script: 'tox --notest -e tests-salt-2018.3.0-py34')
}
)
}
@@ -59,21 +31,17 @@
stage('Run Tests') {
parallel(
'Python 2.7': {
- container('tox') {
- try {
- sh 'tox -e tests-salt-2018.3.0-py27 -- --with-xunit --xunit-testsuite-name=salt-2018.3.0-py27 --xunit-file=tests-salt-2018.3.0-py27.xml'
- } finally {
- junit "tests-salt-2018.3.0-py27.xml"
- }
+ try {
+ sh(script: 'tox -e tests-salt-2018.3.0-py27 -- --with-xunit --xunit-testsuite-name=salt-2018.3.0-py27 --xunit-file=tests-salt-2018.3.0-py27.xml')
+ } finally {
+ junit "tests-salt-2018.3.0-py27.xml"
}
},
'Python 3.4': {
- container('tox3') {
- try {
- sh 'tox -e tests-salt-2018.3.0-py34 -- --with-xunit --xunit-testsuite-name=salt-2018.3.0-py34 --xunit-file=tests-salt-2018.3.0-py34.xml'
- } finally {
- junit "tests-salt-2018.3.0-py34.xml"
- }
+ try {
+ sh(script: 'tox -e tests-salt-2018.3.0-py34 -- --with-xunit --xunit-testsuite-name=salt-2018.3.0-py34 --xunit-file=tests-salt-2018.3.0-py34.xml')
+ } finally {
+ junit "tests-salt-2018.3.0-py34.xml"
}
}
)
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/salt-master/README.md new/salt-master/README.md
--- old/salt-master/README.md 2019-01-28 16:51:16.000000000 +0100
+++ new/salt-master/README.md 2019-02-27 15:33:22.000000000 +0100
@@ -5,12 +5,21 @@
# Running Tests
-## Style Checks:
+First of all, you have to install tox:
-Install tox, and run the style checks:
+ % zypper in python-tox
- zypper in python-tox
- tox -e flake8
+After that, from the root of the project you can run style checks:
+
+ $ tox -e flake8
+
+And unit tests:
+
+ $ tox -e tests-salt-2018.3.0-py27
+
+If you want to run everything, simply run:
+
+ $ tox
# Salt states and CaaSP architecture
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/salt-master/packaging/suse/make_spec.sh new/salt-master/packaging/suse/make_spec.sh
--- old/salt-master/packaging/suse/make_spec.sh 2019-01-28 16:51:16.000000000 +0100
+++ new/salt-master/packaging/suse/make_spec.sh 2019-02-27 15:33:22.000000000 +0100
@@ -43,8 +43,8 @@
%endif
%if 0%{?suse_version} >= 1500 && !0%{?is_opensuse}
- # Use the sles12 images from the registry
- %define _base_image registry.suse.de/devel/casp/3.0/controllernode/images_container_base/sles12
+ # Use the caasp images from the registry
+ %define _base_image registry.suse.com/caasp/v4
%endif
%if 0%{?is_opensuse} && 0%{?suse_version} > 1500
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/salt-master/pillar/certificates.sls new/salt-master/pillar/certificates.sls
--- old/salt-master/pillar/certificates.sls 2019-01-28 16:51:16.000000000 +0100
+++ new/salt-master/pillar/certificates.sls 2019-02-27 15:33:22.000000000 +0100
@@ -40,6 +40,9 @@
kube_apiserver_key: '/etc/pki/kube-apiserver.key'
kube_apiserver_crt: '/etc/pki/kube-apiserver.crt'
+ kube_apiserver_kubelet_client_key: '/etc/pki/kube-apiserver-kubelet-client.key'
+ kube_apiserver_kubelet_client_crt: '/etc/pki/kube-apiserver-kubelet-client.crt'
+
kube_apiserver_proxy_client_key: '/etc/pki/kube-apiserver-proxy-client.key'
kube_apiserver_proxy_client_crt: '/etc/pki/kube-apiserver-proxy-client.crt'
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/salt-master/pillar/cni.sls new/salt-master/pillar/cni.sls
--- old/salt-master/pillar/cni.sls 2019-01-28 16:51:16.000000000 +0100
+++ new/salt-master/pillar/cni.sls 2019-02-27 15:33:22.000000000 +0100
@@ -16,6 +16,9 @@
# 7 - Display HTTP request headers.
# 8 - Display HTTP request contents.
log_level: '2'
+# cilium configuration
+cilium:
+ image: 'cilium:1.2.1'
# CNI network configuration
cni:
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/salt-master/pillar/top.sls new/salt-master/pillar/top.sls
--- old/salt-master/pillar/top.sls 2019-01-28 16:51:16.000000000 +0100
+++ new/salt-master/pillar/top.sls 2019-02-27 15:33:22.000000000 +0100
@@ -10,6 +10,7 @@
- docker
- registries
- schedule
+ - volume
'roles:ca':
- match: grain
- ca
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/salt-master/pillar/volume.sls new/salt-master/pillar/volume.sls
--- old/salt-master/pillar/volume.sls 1970-01-01 01:00:00.000000000 +0100
+++ new/salt-master/pillar/volume.sls 2019-02-27 15:33:22.000000000 +0100
@@ -0,0 +1,5 @@
+
+volume:
+ dirs:
+ bin: '/var/lib/kubelet/plugins'
+
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/salt-master/salt/_macros/kubectl.jinja new/salt-master/salt/_macros/kubectl.jinja
--- old/salt-master/salt/_macros/kubectl.jinja 2019-01-28 16:51:16.000000000 +0100
+++ new/salt-master/salt/_macros/kubectl.jinja 2019-02-27 15:33:22.000000000 +0100
@@ -96,6 +96,7 @@
availableReplicas=$(kubectl --request-timeout=1m --kubeconfig={{ pillar['paths']['kubeconfig'] }} get deployment {{ deployment }} --namespace={{ namespace }} --template {{ '{{.status.availableReplicas}}' }})
updatedReplicas=$(kubectl --request-timeout=1m --kubeconfig={{ pillar['paths']['kubeconfig'] }} get deployment {{ deployment }} --namespace={{ namespace }} --template {{ '{{.status.updatedReplicas}}' }})
[ "$readyReplicas" == "$desiredReplicas" ] && [ "$availableReplicas" == "$desiredReplicas" ] && [ "$updatedReplicas" == "$desiredReplicas" ]
+ - timeout: {{ timeout }}
- retry:
attempts: {{ timeout }}
interval: 1
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/salt-master/salt/_modules/caasp_filters.py new/salt-master/salt/_modules/caasp_filters.py
--- old/salt-master/salt/_modules/caasp_filters.py 2019-01-28 16:51:16.000000000 +0100
+++ new/salt-master/salt/_modules/caasp_filters.py 2019-02-27 15:33:22.000000000 +0100
@@ -1,6 +1,13 @@
from __future__ import absolute_import
-from salt._compat import ipaddress
+import os
+import socket
+
+# TODO: in Python 3 there is an ipaddress module which works out of the box. In
+# fact, Salt is using this module when running in Python 3. For Python 2 Salt
+# is using a custom implementation, which is buggy on some checks. Thus,
+# whenever we jump into Python3, we should consider using either
+# salt._compat.ipaddress, or the module from Python 3.
def __virtual__():
@@ -11,8 +18,6 @@
'''
Returns a bool telling if the passed IP is a valid IPv4 or IPv6 address.
'''
- # TODO: use the builtin filter (https://docs.saltstack.com/en/latest/topics/jinja/index.html#is-ip)
- # once we Salt>2017.7.0
return is_ipv4(ip) or is_ipv6(ip)
@@ -20,11 +25,10 @@
'''
Returns a bool telling if the value passed to it was a valid IPv4 address
'''
- # TODO: use the builtin filter (https://docs.saltstack.com/en/latest/topics/jinja/index.html#is-ipv4)
- # once we Salt>2017.7.0
try:
- return ipaddress.ip_address(ip).version == 4
- except ValueError:
+ socket.inet_pton(socket.AF_INET, ip)
+ return True
+ except socket.error:
return False
@@ -32,15 +36,23 @@
'''
Returns a bool telling if the value passed to it was a valid IPv6 address
'''
- # TODO: use the builtin filter (https://docs.saltstack.com/en/latest/topics/jinja/index.html#is-ipv6)
- # once we Salt>2017.7.0
try:
- return ipaddress.ip_address(ip).version == 6
- except ValueError:
+ socket.inet_pton(socket.AF_INET6, ip)
+ return True
+ except socket.error:
return False
def get_max(seq):
- # TODO: use the builtin filter (https://docs.saltstack.com/en/latest/topics/jinja/index.html#max)
+ # TODO: use the builtin filter
+ # (https://docs.saltstack.com/en/latest/topics/jinja/index.html#max)
# once we Salt>2017.7.0
return max(seq)
+
+
+def basename(filename):
+ '''
+ Wrapper around os.path.basename for use in jinja templates.
+ '''
+ # Return the last path segment
+ return os.path.basename(filename)
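Restated outside Salt so it can be run directly, the new socket-based check
also rejects hostnames that previously slipped through (see bsc#1123291 in the
new tests further down):

    import socket

    def is_ipv4(ip):
        # inet_pton only accepts a full dotted quad
        try:
            socket.inet_pton(socket.AF_INET, ip)
            return True
        except socket.error:
            return False

    print(is_ipv4('192.168.23.1'))       # True
    print(is_ipv4('master85.test.net'))  # False, the bsc#1123291 case
    print(is_ipv4('127.1'))              # False, unlike inet_aton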
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/salt-master/salt/_modules/caasp_hosts.py new/salt-master/salt/_modules/caasp_hosts.py
--- old/salt-master/salt/_modules/caasp_hosts.py 2019-01-28 16:51:16.000000000 +0100
+++ new/salt-master/salt/_modules/caasp_hosts.py 2019-02-27 15:33:22.000000000 +0100
@@ -68,13 +68,25 @@
return "caasp_hosts"
-def _concat(lst1, lst2):
+# returns the list resulting from appending `lst2` to `lst1`, removing
+# duplicates from both lists (not preserving order in either) and removing
+# empty elements from the result. The result is sorted as well.
+def _sorted_append(lst1, lst2):
res = list(set(lst1) | set(lst2)) # join both lists (without dups)
res = [x for x in res if x] # remove empty strings
res.sort() # sort the result (for determinism)
return res
+# returns the list resulting from prepending `lst2` to `lst1`, not removing
+# duplicates from either list (preserving order in both) and removing empty
+# elements from the result
+def _unsorted_prepend(lst1, lst2):
+ res = lst2 + lst1 # unsorted prepend of lst2 in lst1
+ res = [x for x in res if x] # remove empty strings
+ return res
+
+
def _load_lines(filename):
__utils__['caasp_log.debug']('hosts: loading %s', filename)
with open(filename, 'r') as f:
@@ -141,7 +153,7 @@
# add a (list of) name(s) to a (maybe existing) IP
# it will remove duplicates, sort names, etc...
-def _add_names(hosts, ips, names):
+def _add_names(hosts, ips, names, insert_fun=_sorted_append):
if not isinstance(names, list):
names = [names]
if not isinstance(ips, list):
@@ -150,20 +162,25 @@
for ip in ips:
__utils__['caasp_log.debug']('hosts: adding %s -> %s', ip, names)
if ip not in hosts:
- hosts[ip] = _concat([], names)
+ hosts[ip] = insert_fun([], names)
else:
- hosts[ip] = _concat(hosts[ip], names)
+ hosts[ip] = insert_fun(hosts[ip], names)
-def _add_names_for(hosts, nodes_dict, infra_domain):
+def _add_names_for(hosts, nodes_dict, infra_domain, insert_fun=_sorted_append):
for id, ifaces in nodes_dict.items():
ip = __salt__['caasp_net.get_primary_ip'](host=id, ifaces=ifaces)
if ip:
- _add_names(hosts, ip, [id, id + '.' + infra_domain])
+ _add_names(hosts, ip, [id, id + '.' + infra_domain], insert_fun)
+
+def _add_nodenames_for(hosts, nodes_dict, infra_domain):
+ for id, ifaces in nodes_dict.items():
+ ip = __salt__['caasp_net.get_primary_ip'](host=id, ifaces=ifaces)
+ if ip:
nodename = __salt__['caasp_net.get_nodename'](host=id)
if nodename:
- _add_names(hosts, ip, [nodename, nodename + '.' + infra_domain])
+ _add_names(hosts, ip, [nodename, nodename + '.' + infra_domain], _unsorted_prepend)
# note regarding node removals:
@@ -271,12 +288,15 @@
def get_with_expr(expr):
return __salt__['caasp_nodes.get_with_expr'](expr, grain='network.interfaces')
+ admin_nodes = admin_nodes or get_with_expr(ADMIN_EXPR)
+ master_nodes = master_nodes or get_with_expr(MASTER_EXPR)
+ worker_nodes = worker_nodes or get_with_expr(WORKER_EXPR)
+ other_nodes = other_nodes or get_with_expr(OTHER_EXPR)
+
# add all the entries
try:
- _add_names_for(hosts, admin_nodes or get_with_expr(ADMIN_EXPR), infra_domain)
- _add_names_for(hosts, master_nodes or get_with_expr(MASTER_EXPR), infra_domain)
- _add_names_for(hosts, worker_nodes or get_with_expr(WORKER_EXPR), infra_domain)
- _add_names_for(hosts, other_nodes or get_with_expr(OTHER_EXPR), infra_domain)
+ for nodes in [admin_nodes, master_nodes, worker_nodes, other_nodes]:
+ _add_names_for(hosts, nodes, infra_domain)
except Exception as e:
raise EtcHostsRuntimeException(
'Could not add entries for roles in /etc/hosts: {}'.format(e))
@@ -308,19 +328,30 @@
raise EtcHostsRuntimeException(
'Could not add special entries in /etc/hosts: {}'.format(e))
+ # sort the names for determinism
+ for ip, names in hosts.items():
+ names.sort()
+
+ # prepend the nodenames at the beginning of each entry
+ try:
+ for nodes in [admin_nodes, master_nodes, worker_nodes, other_nodes]:
+ _add_nodenames_for(hosts, nodes, infra_domain)
+ except Exception as e:
+ raise EtcHostsRuntimeException(
+ 'Could not add nodenames entries in /etc/hosts: {}'.format(e))
+
# (over)write the /etc/hosts
try:
preface = PREFACE.format(file=caasp_hosts_file).splitlines()
new_etc_hosts_contents = []
for ip, names in hosts.items():
- names.sort()
line = '{0} {1}'.format(ip, ' '.join(names))
new_etc_hosts_contents.append(line.strip().replace('\n', ''))
new_etc_hosts_contents.sort()
new_etc_hosts_contents = preface + new_etc_hosts_contents
- __utils__['caasp_log.info']('hosts: writting new content to %s', orig_etc_hosts)
+ __utils__['caasp_log.info']('hosts: writing new content to %s', orig_etc_hosts)
_write_lines(orig_etc_hosts, new_etc_hosts_contents)
except Exception as e:
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/salt-master/salt/_modules/caasp_net.py new/salt-master/salt/_modules/caasp_net.py
--- old/salt-master/salt/_modules/caasp_net.py 2019-01-28 16:51:16.000000000 +0100
+++ new/salt-master/salt/_modules/caasp_net.py 2019-02-27 15:33:22.000000000 +0100
@@ -4,6 +4,7 @@
#
from __future__ import absolute_import
+from salt.ext.six import iteritems
DEFAULT_INTERFACE = 'eth0'
@@ -23,10 +24,9 @@
:return: Dict(ip_address -> [aliases])
"""
hosts = __salt__['hosts.list_hosts']() # type: dict
- for key in hosts.keys():
- if key.startswith("comment-"):
- hosts.pop(key)
- return hosts
+    # Python 3 (correctly) prevents modifying a dictionary while iterating
+    # over it, so we build a new dict without the comment entries instead.
+ return {k: v for k, v in hosts.items() if not k.startswith("comment-")}
def get_aliases(hostname):
@@ -36,7 +36,7 @@
:return: [string]
"""
hosts = get_hosts()
- for host, aliases in hosts.iteritems():
+ for host, aliases in iteritems(hosts):
if hostname in aliases:
__utils__['caasp_log.debug']('CaaS: retrieved aliases %s for %s', aliases, hostname)
return aliases
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/salt-master/salt/_modules/caasp_orch.py new/salt-master/salt/_modules/caasp_orch.py
--- old/salt-master/salt/_modules/caasp_orch.py 1970-01-01 01:00:00.000000000 +0100
+++ new/salt-master/salt/_modules/caasp_orch.py 2019-02-27 15:33:22.000000000 +0100
@@ -0,0 +1,16 @@
+from __future__ import absolute_import
+
+
+def __virtual__():
+ return "caasp_orch"
+
+
+def sync_all():
+ '''
+    Synchronize everything before starting a new orchestration
+ '''
+ __utils__['caasp_log.debug']('orch: refreshing all')
+ __salt__['saltutil.sync_all'](refresh=True)
+
+ __utils__['caasp_log.debug']('orch: synchronizing the mine')
+ __salt__['saltutil.runner']('mine.update', tgt='*', clear=True)
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/salt-master/salt/_modules/tests/test_caasp_filters.py new/salt-master/salt/_modules/tests/test_caasp_filters.py
--- old/salt-master/salt/_modules/tests/test_caasp_filters.py 1970-01-01 01:00:00.000000000 +0100
+++ new/salt-master/salt/_modules/tests/test_caasp_filters.py 2019-02-27 15:33:22.000000000 +0100
@@ -0,0 +1,59 @@
+from __future__ import absolute_import
+
+import unittest
+
+import caasp_filters
+
+
+class TestIsIP(unittest.TestCase):
+ def test_is_ipv4(self):
+ # Valid IPv4 addresses.
+ self.assertTrue(caasp_filters.is_ipv4("127.0.0.1"))
+ self.assertTrue(caasp_filters.is_ipv4("192.168.23.1"))
+ self.assertTrue(caasp_filters.is_ipv4("192.168.23.255"))
+ self.assertTrue(caasp_filters.is_ipv4("255.255.255.255"))
+ self.assertTrue(caasp_filters.is_ipv4("0.0.0.0"))
+
+ # Invalid IPv4 addresses.
+ self.assertFalse(caasp_filters.is_ipv4("30.168.1.255.1"))
+ self.assertFalse(caasp_filters.is_ipv4("127.1"))
+ self.assertFalse(caasp_filters.is_ipv4("-1.0.2.3"))
+ self.assertFalse(caasp_filters.is_ipv4("3...3"))
+ self.assertFalse(caasp_filters.is_ipv4("whatever"))
+
+ # see bsc#1123291
+ self.assertFalse(caasp_filters.is_ipv4("master85.test.net"))
+
+ def test_is_ipv6(self):
+ self.assertTrue(
+ caasp_filters.is_ipv6("1111:2222:3333:4444:5555:6666:7777:8888")
+ )
+ self.assertTrue(
+ caasp_filters.is_ipv6("1111:2222:3333:4444:5555:6666:7777::")
+ )
+ self.assertTrue(caasp_filters.is_ipv6("::"))
+ self.assertTrue(caasp_filters.is_ipv6("::8888"))
+
+ self.assertFalse(
+ caasp_filters.is_ipv6("11112222:3333:4444:5555:6666:7777:8888")
+ )
+ self.assertFalse(caasp_filters.is_ipv6("1111:"))
+ self.assertFalse(caasp_filters.is_ipv6("::."))
+
+
+class TestCaaspFilters(unittest.TestCase):
+ '''
+    Some basic tests for caasp_filters.basename()
+ '''
+
+ def test_basename(self):
+ self.assertEqual("hello", caasp_filters.basename("../hello"))
+ self.assertEqual("world", caasp_filters.basename("../hello/world"))
+ self.assertEqual("", caasp_filters.basename("./"))
+ self.assertEqual("", caasp_filters.basename("../"))
+ self.assertEqual("", caasp_filters.basename("/"))
+ self.assertEqual(".", caasp_filters.basename("."))
+
+
+if __name__ == '__main__':
+ unittest.main()
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/salt-master/salt/_modules/tests/test_caasp_hosts.py new/salt-master/salt/_modules/tests/test_caasp_hosts.py
--- old/salt-master/salt/_modules/tests/test_caasp_hosts.py 2019-01-28 16:51:16.000000000 +0100
+++ new/salt-master/salt/_modules/tests/test_caasp_hosts.py 2019-02-27 15:33:22.000000000 +0100
@@ -56,22 +56,22 @@
caasp_hosts.__utils__ = Utils()
ips = {
- 'admin': '10.10.10.1',
- 'master0': '10.10.10.2',
- 'minion1': '10.10.10.3',
- 'other0': '10.10.10.4'
+ 'admin-minion-id': '10.10.10.1',
+ 'master0-minion-id': '10.10.10.2',
+ 'minion1-minion-id': '10.10.10.3',
+ 'other0-minion-id': '10.10.10.4'
}
- admin_nodes = {'admin': 'eth0'}
- master_nodes = {'master0': 'eth0'}
- worker_nodes = {'minion1': 'eth0'}
- other_nodes = {'other0': 'eth0'}
+ admin_nodes = {'admin-minion-id': 'eth0'}
+ master_nodes = {'master0-minion-id': 'eth0'}
+ worker_nodes = {'minion1-minion-id': 'eth0'}
+ other_nodes = {'other0-minion-id': 'eth0'}
def mock_get_primary_ip(host, ifaces):
return ips[host]
def mock_get_nodename(host):
- return host
+ return "nodename-{}".format(host)
def mock_get_pillar(s, default=None):
return {
@@ -185,13 +185,17 @@
caasp_hosts._load_hosts_file(new_etc_hosts_contents,
etc_hosts.name)
+ def check_entry_strict(ip, names):
+ self.assertIn(ip, new_etc_hosts_contents)
+ self.assertEqual(names, new_etc_hosts_contents[ip])
+
def check_entry(ip, names):
self.assertIn(ip, new_etc_hosts_contents)
for name in names:
self.assertIn(name, new_etc_hosts_contents[ip])
# check the Admin node has the right entries
- check_entry('10.10.10.1', ['admin',
+ check_entry('10.10.10.1', ['admin-minion-id',
'some-other-name-for-admin'])
# check we are setting the right things in 127.0.0.1
@@ -208,6 +212,15 @@
self.assertNotIn(ip, new_etc_hosts_contents)
#
+ # story: nodenames are appended at the beginning of the line
+ #
+
+ check_entry_strict('10.10.10.2', ['nodename-master0-minion-id',
+ 'nodename-master0-minion-id.infra.caasp.local',
+ 'master0-minion-id',
+ 'master0-minion-id.infra.caasp.local'])
+
+ #
# story: this host is highstated again
# we must check the idempotency of 'caasp_hosts'
#
@@ -256,7 +269,7 @@
check_entry('10.10.23.8', ['bar.server.com'])
# repeat previous checks
- check_entry('10.10.10.1', ['admin',
+ check_entry('10.10.10.1', ['admin-minion-id',
'some-other-name-for-admin'])
check_entry('127.0.0.1', ['api', 'api.infra.caasp.local',
EXTERNAL_MASTER_NAME,
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/salt-master/salt/_states/caasp_etcd.py new/salt-master/salt/_states/caasp_etcd.py
--- old/salt-master/salt/_states/caasp_etcd.py 2019-01-28 16:51:16.000000000 +0100
+++ new/salt-master/salt/_states/caasp_etcd.py 2019-02-27 15:33:22.000000000 +0100
@@ -47,13 +47,13 @@
def healthy(name, **kwargs):
result = {'name': "healthy.{0}".format(name),
'result': True,
- 'comment': "Cluster is healthy",
+ 'comment': "etcd cluster is healthy",
'changes': {}}
if not __salt__['caasp_etcd.healthy'](**kwargs):
result.update({
'result': False,
- 'comment': "Cluster is not healthy"
+ 'comment': "etcd cluster is not healthy"
})
return result
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/salt-master/salt/_states/caasp_file.py new/salt-master/salt/_states/caasp_file.py
--- old/salt-master/salt/_states/caasp_file.py 1970-01-01 01:00:00.000000000 +0100
+++ new/salt-master/salt/_states/caasp_file.py 2019-02-27 15:33:22.000000000 +0100
@@ -0,0 +1,76 @@
+from __future__ import absolute_import
+
+import salt.utils.files
+
+
+def managed(name, **kwargs):
+ '''
+    Manage a given file. This function allows for a file to be downloaded from
+ the salt master and potentially run through a templating system.
+
+    This is a wrapper around the standard :py:func:`file.managed <salt.states.file.managed>`
+ state where we can specify a `work_dir` that will be used for creating the temporary
+ file, as the standard version creates a temporary file in the same directory as `name`,
+ and that can lead to some problems with programs/daemons that are watching that
+ directory (like the kubelet with `/etc/kubernetes/manifests`).
+
+ work_dir
+ A directory for creating temporary files.
+
+    For a full list of arguments see :py:func:`file.managed <salt.states.file.managed>`
+ '''
+ def debug(s):
+ __utils__['caasp_log.debug']('CaaS: caasp_file.managed: {}: '.format(name) + s)
+
+ def error(s):
+        return dict(name=name, result=False, comment=s, changes={})
+
+ work_dir = kwargs.pop('work_dir', None)
+ if not work_dir:
+ # if no work_dir has been specified, invoke the regular `managed`
+ return __states__['file.managed'](name=name, **kwargs)
+
+ debug('using working dir {} for managed file {}'.format(work_dir, name))
+
+    # 1. create a temporary file, <tmp_filename>, in <work_dir>
+ tmp_filename = salt.utils.files.mkstemp(dir=work_dir)
+
+ try:
+        # 2. if there is an existing file <name>, copy it to this <tmp_filename>
+ if __salt__['file.file_exists'](name):
+ debug('copying existing {} to temporary file {}'.format(name, tmp_filename))
+ try:
+ # copy the existing file to /tmp/<name>
+ __salt__['file.copy'](name, tmp_filename)
+ except Exception as exc:
+ return error('Unable to copy file {0} to {1}: {2}'.format(name, tmp_filename, exc))
+
+        # 3. manage the <tmp_filename>
+ debug('creating temporary file {}'.format(tmp_filename))
+ ret_managed = __states__['file.managed'](name=tmp_filename, **kwargs)
+ if not ret_managed['result']:
+ return error('Error when creating temporary file {} for {}'.format(tmp_filename, name))
+ changes_managed = ret_managed['changes']
+
+        # 4. finally, copy the <tmp_filename> to the final destination <name>
+ debug('copying temporary file {} to {}'.format(tmp_filename, name))
+ ret_copy = __states__['file.copy'](name=name,
+ source=tmp_filename,
+ force=True,
+ makedirs=False,
+ preserve=True,
+ subdir=False)
+ if not ret_copy['result']:
+            return error('Error when copying temporary file {} to {}'.format(tmp_filename, name))
+
+ # 5. return the `managed` we run in the tmp_filename, but tweaking some things
+ return {
+ 'name': name,
+ 'changes': changes_managed,
+ 'result': True,
+ 'comment': ret_managed['comment'].replace(tmp_filename, name)
+ }
+
+ finally:
+ debug('removing temporary file {}'.format(tmp_filename))
+ salt.utils.files.remove(tmp_filename)
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/salt-master/salt/addons/dex/manifests/20-deployment.yaml new/salt-master/salt/addons/dex/manifests/20-deployment.yaml
--- old/salt-master/salt/addons/dex/manifests/20-deployment.yaml 2019-01-28 16:51:16.000000000 +0100
+++ new/salt-master/salt/addons/dex/manifests/20-deployment.yaml 2019-02-27 15:33:22.000000000 +0100
@@ -19,7 +19,6 @@
labels:
app: dex
annotations:
- scheduler.alpha.kubernetes.io/critical-pod: ''
# Kubernetes will not restart dex when the configmap or secret changes, and
# dex will not notice anything has been changed either. By storing the checksum
# within an annotation, we force Kubernetes to perform the rolling restart
@@ -33,8 +32,9 @@
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
- - key: "CriticalAddonsOnly"
- operator: "Exists"
+
+ # prevent evictions
+ priorityClassName: system-node-critical
# ensure dex pods are running on different hosts
affinity:
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/salt-master/salt/addons/dns/manifests/20-deployment.yaml new/salt-master/salt/addons/dns/manifests/20-deployment.yaml
--- old/salt-master/salt/addons/dns/manifests/20-deployment.yaml 2019-01-28 16:51:16.000000000 +0100
+++ new/salt-master/salt/addons/dns/manifests/20-deployment.yaml 2019-02-27 15:33:22.000000000 +0100
@@ -21,15 +21,14 @@
labels:
k8s-app: kube-dns
annotations:
- scheduler.alpha.kubernetes.io/critical-pod: ''
seccomp.security.alpha.kubernetes.io/pod: docker/default
spec:
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
- - key: "CriticalAddonsOnly"
- operator: "Exists"
+ # prevent evictions
+ priorityClassName: system-node-critical
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/salt-master/salt/addons/tiller/manifests/20-deployment.yaml new/salt-master/salt/addons/tiller/manifests/20-deployment.yaml
--- old/salt-master/salt/addons/tiller/manifests/20-deployment.yaml 2019-01-28 16:51:16.000000000 +0100
+++ new/salt-master/salt/addons/tiller/manifests/20-deployment.yaml 2019-02-27 15:33:22.000000000 +0100
@@ -21,15 +21,11 @@
labels:
app: helm
name: tiller
- annotations:
- scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
- - key: "CriticalAddonsOnly"
- operator: "Exists"
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
@@ -42,6 +38,8 @@
operator: In
values:
- tiller
+ # prevent evictions
+ priorityClassName: system-node-critical
containers:
- env:
- name: TILLER_NAMESPACE
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/salt-master/salt/cert/init.sls new/salt-master/salt/cert/init.sls
--- old/salt-master/salt/cert/init.sls 2019-01-28 16:51:16.000000000 +0100
+++ new/salt-master/salt/cert/init.sls 2019-02-27 15:33:22.000000000 +0100
@@ -17,7 +17,7 @@
{% set system_certs = salt.caasp_pillar.get('system_certificates', []) %}
{% for cert in system_certs %}
- {% set name, cert = cert['name'], cert['cert'] %}
+ {% set name, cert = salt.caasp_filters.basename(cert['name']), cert['cert'] %}
/etc/pki/trust/anchors/{{ name }}.crt:
file.managed:
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/salt-master/salt/cni/cilium-ds.yaml.jinja new/salt-master/salt/cni/cilium-ds.yaml.jinja
--- old/salt-master/salt/cni/cilium-ds.yaml.jinja 2019-01-28 16:51:16.000000000 +0100
+++ new/salt-master/salt/cni/cilium-ds.yaml.jinja 2019-02-27 15:33:22.000000000 +0100
@@ -21,18 +21,13 @@
k8s-app: cilium
kubernetes.io/cluster-service: "true"
annotations:
- # This annotation plus the CriticalAddonsOnly toleration makes
- # cilium to be a critical pod in the cluster, which ensures cilium
- # gets priority scheduling.
- # https://kubernetes.io/docs/tasks/administer-cluster/guaranteed-scheduling-cr...
- scheduler.alpha.kubernetes.io/critical-pod: ''
scheduler.alpha.kubernetes.io/tolerations: >-
[{"key":"dedicated","operator":"Equal","value":"master","effect":"NoSchedule"}]
spec:
serviceAccountName: cilium
initContainers:
- name: install-cni-conf
- image: {{ pillar['cilium']['image'] }}
+ image: {{ salt.caasp_registry.base_image_url() }}/{{ pillar['cilium']['image'] }}
command:
- /bin/sh
- "-c"
@@ -41,7 +36,7 @@
- name: host-cni-conf
mountPath: /host/etc/cni/net.d
- name: install-cni-bin
- image: {{ pillar['cilium']['image'] }}
+ image: {{ salt.caasp_registry.base_image_url() }}/{{ pillar['cilium']['image'] }}
command:
- /bin/sh
- "-c"
@@ -50,8 +45,11 @@
- name: host-cni-bin
mountPath: /host/opt/cni/bin/
+ # prevent evictions
+ priorityClassName: system-node-critical
+
containers:
- - image: {{ pillar['cilium']['image'] }}
+ - image: {{ salt.caasp_registry.base_image_url() }}/{{ pillar['cilium']['image'] }}
imagePullPolicy: IfNotPresent
name: cilium-agent
command: [ "cilium-agent" ]
@@ -165,7 +163,4 @@
- effect: NoSchedule
key: node.cloudprovider.kubernetes.io/uninitialized
value: "true"
- # Mark cilium's pod as critical for rescheduling
- - key: CriticalAddonsOnly
- operator: "Exists"
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/salt-master/salt/cni/kube-flannel-rbac.yaml.jinja new/salt-master/salt/cni/kube-flannel-rbac.yaml.jinja
--- old/salt-master/salt/cni/kube-flannel-rbac.yaml.jinja 2019-01-28 16:51:16.000000000 +0100
+++ new/salt-master/salt/cni/kube-flannel-rbac.yaml.jinja 2019-02-27 15:33:22.000000000 +0100
@@ -1,9 +1,63 @@
---
+apiVersion: extensions/v1beta1
+kind: PodSecurityPolicy
+metadata:
+ name: suse.caasp.psp.flannel.unprivileged
+ annotations:
+ seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
+ seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
+ apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
+ apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
+spec:
+ # Privileged
+ privileged: false
+ # Volumes and File Systems
+ volumes:
+ # Kubernetes Pseudo Volume Types
+ - configMap
+ - secret
+ - emptyDir
+ - hostPath
+ allowedHostPaths:
+ - pathPrefix: "/run/flannel"
+ - pathPrefix: "/etc/cni/net.d"
+ - pathPrefix: "/var/lib/kubelet/cni/bin"
+ readOnlyRootFilesystem: false
+ # Users and groups
+ runAsUser:
+ rule: RunAsAny
+ supplementalGroups:
+ rule: RunAsAny
+ fsGroup:
+ rule: RunAsAny
+ # Privilege Escalation
+ allowPrivilegeEscalation: false
+ defaultAllowPrivilegeEscalation: false
+ # Capabilities
+ allowedCapabilities: ['NET_ADMIN']
+ defaultAddCapabilities: []
+ requiredDropCapabilities: []
+ # Host namespaces
+ hostPID: false
+ hostIPC: false
+ hostNetwork: true
+ hostPorts:
+ - min: 0
+ max: 65535
+ # SELinux
+ seLinux:
+    # SELinux is unused in CaaSP
+ rule: 'RunAsAny'
+---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: suse:caasp:flannel
rules:
+ - apiGroups: ['extensions']
+ resources: ['podsecuritypolicies']
+ verbs: ['use']
+ resourceNames: ['suse.caasp.psp.flannel.unprivileged']
- apiGroups:
- ""
resources:
@@ -23,7 +77,6 @@
- nodes/status
verbs:
- patch
-
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
@@ -36,20 +89,4 @@
subjects:
- kind: ServiceAccount
name: flannel
- namespace: kube-system
-
----
-# Allow Flannel to use the suse:caasp:psp:privileged
-# PodSecurityPolicy.
-apiVersion: rbac.authorization.k8s.io/v1
-kind: ClusterRoleBinding
-metadata:
- name: suse:caasp:psp:flannel
-roleRef:
- kind: ClusterRole
- name: suse:caasp:psp:privileged
- apiGroup: rbac.authorization.k8s.io
-subjects:
-- kind: ServiceAccount
- name: flannel
namespace: kube-system
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/salt-master/salt/cni/kube-flannel.yaml.jinja new/salt-master/salt/cni/kube-flannel.yaml.jinja
--- old/salt-master/salt/cni/kube-flannel.yaml.jinja 2019-01-28 16:51:16.000000000 +0100
+++ new/salt-master/salt/cni/kube-flannel.yaml.jinja 2019-02-27 15:33:22.000000000 +0100
@@ -108,7 +108,9 @@
- "--healthz-ip=$(POD_IP)"
- "--healthz-port={{ pillar['flannel']['healthz_port'] }}"
securityContext:
- privileged: true
+ privileged: false
+ capabilities:
+ add: ["NET_ADMIN"]
ports:
- name: healthz
containerPort: {{ pillar['flannel']['healthz_port'] }}
@@ -141,7 +143,7 @@
fieldPath: spec.nodeName
volumeMounts:
- name: run
- mountPath: /run
+ mountPath: /run/flannel
- name: host-cni-conf
mountPath: /etc/cni/net.d
- name: flannel-plugin-config
@@ -152,16 +154,14 @@
tolerations:
# Allow the pod to run on the master. This is required for
# the master to communicate with pods.
- - key: node-role.kubernetes.io/master
- operator: Exists
+ - operator: Exists
effect: NoSchedule
- # Mark the pod as a critical add-on for rescheduling.
- - key: "CriticalAddonsOnly"
- operator: "Exists"
+ # prevent evictions
+ priorityClassName: system-node-critical
volumes:
- name: run
hostPath:
- path: /run
+ path: /run/flannel
- name: host-cni-conf
hostPath:
path: {{ pillar['cni']['dirs']['conf'] }}
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/salt-master/salt/crio/crio.conf.jinja new/salt-master/salt/crio/crio.conf.jinja
--- old/salt-master/salt/crio/crio.conf.jinja 2019-01-28 16:51:16.000000000 +0100
+++ new/salt-master/salt/crio/crio.conf.jinja 2019-02-27 15:33:22.000000000 +0100
@@ -121,7 +121,7 @@
default_transport = "docker://"
# pause_image is the image which we use to instantiate infra containers.
-pause_image = "docker.io/sles12/pause"
+pause_image = "docker.io/{{ pillar['pod_infra_container_image'] }}"
# pause_command is the command to run in a pause_image to have a container just
# sit there. If the image contains the necessary information, this value need
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/salt-master/salt/etcd/init.sls new/salt-master/salt/etcd/init.sls
--- old/salt-master/salt/etcd/init.sls 2019-01-28 16:51:16.000000000 +0100
+++ new/salt-master/salt/etcd/init.sls 2019-02-27 15:33:22.000000000 +0100
@@ -70,6 +70,12 @@
caasp_etcd.healthy:
- watch:
- caasp_service: etcd
+ # New in version 2017.7.0.
+ - retry:
+ attempts: 10
+ until: True
+ interval: 15
+ splay: 10
/etc/sysconfig/etcd:
file.managed:
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/salt-master/salt/haproxy/haproxy.yaml.jinja new/salt-master/salt/haproxy/haproxy.yaml.jinja
--- old/salt-master/salt/haproxy/haproxy.yaml.jinja 2019-01-28 16:51:16.000000000 +0100
+++ new/salt-master/salt/haproxy/haproxy.yaml.jinja 2019-02-27 15:33:22.000000000 +0100
@@ -7,7 +7,6 @@
labels:
name: haproxy
annotations:
- scheduler.alpha.kubernetes.io/critical-pod: ''
seccomp.security.alpha.kubernetes.io/pod: docker/default
spec:
restartPolicy: Always
@@ -16,8 +15,8 @@
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
- - key: "CriticalAddonsOnly"
- operator: "Exists"
+ # prevent evictions
+ priorityClassName: system-node-critical
containers:
- name: haproxy
image: {{ salt.caasp_registry.base_image_url() }}/haproxy:1.6.0
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/salt-master/salt/haproxy/init.sls new/salt-master/salt/haproxy/init.sls
--- old/salt-master/salt/haproxy/init.sls 2019-01-28 16:51:16.000000000 +0100
+++ new/salt-master/salt/haproxy/init.sls 2019-02-27 15:33:22.000000000 +0100
@@ -43,7 +43,7 @@
extra_alt_names = alt_master_names()) }}
haproxy:
- file.managed:
+ caasp_file.managed:
- name: /etc/kubernetes/manifests/haproxy.yaml
- source: salt://haproxy/haproxy.yaml.jinja
- template: jinja
@@ -51,6 +51,7 @@
- group: root
- mode: 644
- makedirs: True
+ - work_dir: /tmp
- dir_mode: 755
caasp_retriable.retry:
- name: iptables-haproxy
@@ -80,7 +81,7 @@
- namespace: kube-system
- timeout: 60
- onchanges:
- - file: haproxy
+ - caasp_file: haproxy
- file: /etc/caasp/haproxy/haproxy.cfg
{% if not salt.caasp_nodes.is_admin_node() %}
- require:
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/salt-master/salt/kube-apiserver/apiserver.jinja new/salt-master/salt/kube-apiserver/apiserver.jinja
--- old/salt-master/salt/kube-apiserver/apiserver.jinja 2019-01-28 16:51:16.000000000 +0100
+++ new/salt-master/salt/kube-apiserver/apiserver.jinja 2019-02-27 15:33:22.000000000 +0100
@@ -11,10 +11,10 @@
{%- endif %}
# The address on the local server to listen to.
-KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1 --bind-address=0.0.0.0"
+KUBE_API_ADDRESS="--bind-address=0.0.0.0"
# The port on the local server to listen on.
-KUBE_API_PORT="--insecure-port=8080 --secure-port={{ pillar['api']['int_ssl_port'] }}"
+KUBE_API_PORT="--insecure-port=0 --secure-port={{ pillar['api']['int_ssl_port'] }}"
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-cafile={{ pillar['ssl']['ca_file'] }} \
@@ -54,6 +54,9 @@
--requestheader-group-headers=X-Remote-Group \
--requestheader-extra-headers-prefix=X-Remote-Extra \
--requestheader-client-ca-file={{ pillar['ssl']['ca_file'] }} \
+ --kubelet-certificate-authority={{ pillar['ssl']['ca_file'] }} \
+ --kubelet-client-certificate={{ pillar['ssl']['kube_apiserver_kubelet_client_crt'] }} \
+ --kubelet-client-key={{ pillar['ssl']['kube_apiserver_kubelet_client_key'] }} \
--proxy-client-cert-file={{ pillar['ssl']['kube_apiserver_proxy_client_crt'] }} \
--proxy-client-key-file={{ pillar['ssl']['kube_apiserver_proxy_client_key'] }} \
--storage-backend={{ pillar['api']['etcd_version'] }} \
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/salt-master/salt/kube-apiserver/init.sls new/salt-master/salt/kube-apiserver/init.sls
--- old/salt-master/salt/kube-apiserver/init.sls 2019-01-28 16:51:16.000000000 +0100
+++ new/salt-master/salt/kube-apiserver/init.sls 2019-02-27 15:33:22.000000000 +0100
@@ -19,9 +19,16 @@
cn = grains['nodename'],
o = pillar['certificate_information']['subject_properties']['O']) }}
+{% from '_macros/certs.jinja' import certs with context %}
+{{ certs("kube-apiserver-kubelet-client",
+ pillar['ssl']['kube_apiserver_kubelet_client_crt'],
+ pillar['ssl']['kube_apiserver_kubelet_client_key'],
+ cn = grains['nodename'],
+ o = pillar['certificate_information']['subject_properties']['O']) }}
+
kube-apiserver:
caasp_retriable.retry:
- - name: iptables-kube-apiserver
+ - name: iptables-kube-apiserver
- target: iptables.append
- retry:
attempts: 2
@@ -40,8 +47,12 @@
- name: /etc/kubernetes/apiserver
- source: salt://kube-apiserver/apiserver.jinja
- template: jinja
- service.running:
- - enable: True
+ caasp_service.running_stable:
+ - name: kube-apiserver
+ - successful_retries_in_a_row: 10
+ - max_retries: 30
+ - delay_between_retries: 2
+ - enable: True
- require:
- caasp_retriable: iptables-kube-apiserver
- sls: ca-cert
@@ -59,9 +70,9 @@
/var/log/kube-apiserver:
file.directory:
- - user: kube
- - group: kube
- - dirmode: 755
+ - user: kube
+ - group: kube
+ - dirmode: 755
- filemode: 644
/etc/kubernetes/audit-policy.yaml:
@@ -92,6 +103,6 @@
- opts:
http_request_timeout: 30
- watch:
- - service: kube-apiserver
+ - caasp_service: kube-apiserver
{% endfor %}
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/salt-master/salt/kube-controller-manager/controller-manager.jinja new/salt-master/salt/kube-controller-manager/controller-manager.jinja
--- old/salt-master/salt/kube-controller-manager/controller-manager.jinja 2019-01-28 16:51:16.000000000 +0100
+++ new/salt-master/salt/kube-controller-manager/controller-manager.jinja 2019-02-27 15:33:22.000000000 +0100
@@ -18,10 +18,13 @@
--allocate-node-cidrs=true \
--node-cidr-mask-size={{ pillar['cluster_cidr_len'] }} \
--root-ca-file={{ pillar['ssl']['ca_file'] }} \
+{% if pillar['volume']['dirs']['bin'] -%}
+ --flex-volume-plugin-dir={{ pillar['volume']['dirs']['bin'] }} \
+{% endif -%}
{% if cloud_provider -%}
- --cloud-provider={{ cloud_provider }} \
+ --cloud-provider={{ cloud_provider }} \
{% if cloud_provider == 'openstack' -%}
- --cloud-config=/etc/kubernetes/openstack-config \
+ --cloud-config=/etc/kubernetes/openstack-config \
{% endif -%}
{% endif -%}
{{ managr_args }}"
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/salt-master/salt/kubelet/init.sls new/salt-master/salt/kubelet/init.sls
--- old/salt-master/salt/kubelet/init.sls 2019-01-28 16:51:16.000000000 +0100
+++ new/salt-master/salt/kubelet/init.sls 2019-02-27 15:33:22.000000000 +0100
@@ -52,6 +52,13 @@
- dir_mode: 755
- makedirs: True
+{{ pillar['volume']['dirs']['bin'] }}:
+ file.directory:
+ - user: root
+ - group: root
+ - dir_mode: 755
+ - makedirs: True
+
kubelet-config:
file.managed:
- name: /etc/kubernetes/kubelet-config.yaml
@@ -79,6 +86,7 @@
{% endif %}
- file: {{ pillar['cni']['dirs']['bin'] }}
- file: {{ pillar['cni']['dirs']['conf'] }}
+ - file: {{ pillar['volume']['dirs']['bin'] }}
- require:
- file: /etc/kubernetes/manifests
- file: /etc/kubernetes/kubelet-initial
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/salt-master/salt/kubelet/kubelet-config.jinja new/salt-master/salt/kubelet/kubelet-config.jinja
--- old/salt-master/salt/kubelet/kubelet-config.jinja 2019-01-28 16:51:16.000000000 +0100
+++ new/salt-master/salt/kubelet/kubelet-config.jinja 2019-02-27 15:33:22.000000000 +0100
@@ -15,7 +15,7 @@
enabled: false
cacheTTL: 2m0s
anonymous:
- enabled: true
+ enabled: false
authorization:
mode: AlwaysAllow
webhook:
@@ -47,6 +47,8 @@
nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
enableControllerAttachDetach: true
+featureGates:
+ ExperimentalCriticalPodAnnotation: true
makeIPTablesUtilChains: true
iptablesMasqueradeBit: 14
iptablesDropBit: 15
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/salt-master/salt/kubelet/kubelet.jinja new/salt-master/salt/kubelet/kubelet.jinja
--- old/salt-master/salt/kubelet/kubelet.jinja 2019-01-28 16:51:16.000000000 +0100
+++ new/salt-master/salt/kubelet/kubelet.jinja 2019-02-27 15:33:22.000000000 +0100
@@ -19,7 +19,6 @@
# Add your own!
KUBELET_ARGS="\
- --cadvisor-port=0 \
--read-only-port=0 \
--config=/etc/kubernetes/kubelet-config.yaml \
{%- if salt.caasp_pillar.get_kubelet_reserved_resources('kube') %}
@@ -53,4 +52,7 @@
--cni-bin-dir={{ pillar['cni']['dirs']['bin'] }} \
--cni-conf-dir={{ pillar['cni']['dirs']['conf'] }} \
--kubeconfig={{ pillar['paths']['kubelet_config'] }} \
- --volume-plugin-dir=/usr/lib/kubernetes/kubelet-plugins"
+{%- if pillar['volume']['dirs']['bin'] %}
+ --volume-plugin-dir={{ pillar['volume']['dirs']['bin'] }} \
+{%- endif %}
+ "
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/salt-master/salt/kubelet/stop.sls new/salt-master/salt/kubelet/stop.sls
--- old/salt-master/salt/kubelet/stop.sls 2019-01-28 16:51:16.000000000 +0100
+++ new/salt-master/salt/kubelet/stop.sls 2019-02-27 15:33:22.000000000 +0100
@@ -10,7 +10,12 @@
drain-kubelet:
cmd.run:
- name: |
+        # give the node some time to drain; if it times out, continue so the update can proceed
+        # this can cause application downtime, so ideally an application is drainable within the drain-timeout
+ # bsc#1116049
kubectl --request-timeout=1m --kubeconfig={{ pillar['paths']['kubeconfig'] }} drain {{ grains['nodename'] }} --force --delete-local-data=true --ignore-daemonsets --timeout={{ pillar['kubelet']['drain-timeout'] }}s
+ - check_cmd:
+ - /bin/true
- require:
- file: {{ pillar['paths']['kubeconfig'] }}
{%- if not node_removal_in_progress %}
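
The `check_cmd` with `/bin/true` added above makes the drain state report
success even when `kubectl drain` runs into its timeout, so a slow drain no
longer aborts the whole orchestration. Taken out of Salt, the same
tolerate-the-timeout behaviour could be sketched in Python along these lines
(illustrative only; `drain_node` and its parameters are not part of the
package):

    # a minimal sketch of "drain, but never fail the update"
    import subprocess

    def drain_node(nodename, kubeconfig, drain_timeout_s):
        cmd = [
            'kubectl', '--request-timeout=1m',
            '--kubeconfig={}'.format(kubeconfig),
            'drain', nodename,
            '--force', '--delete-local-data=true', '--ignore-daemonsets',
            '--timeout={}s'.format(drain_timeout_s),
        ]
        # like check_cmd: /bin/true above, the exit status is deliberately
        # ignored: a slow drain may cause application downtime, but it must
        # not block the node update (bsc#1116049)
        subprocess.call(cmd)
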
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/salt-master/salt/kubernetes-common/openstack-config.jinja new/salt-master/salt/kubernetes-common/openstack-config.jinja
--- old/salt-master/salt/kubernetes-common/openstack-config.jinja 2019-01-28 16:51:16.000000000 +0100
+++ new/salt-master/salt/kubernetes-common/openstack-config.jinja 2019-02-27 15:33:22.000000000 +0100
@@ -1,28 +1,28 @@
[Global]
-auth-url="{{ pillar['cloud']['openstack']['auth_url'] }}"
-username="{{ pillar['cloud']['openstack']['username'] }}"
-password="{{ pillar['cloud']['openstack']['password'] }}"
-{%- if pillar['cloud']['openstack']['project_id'] %}
-tenant-id="{{ pillar['cloud']['openstack']['project_id'] }}"
+auth-url="{{ salt.caasp_pillar.get('cloud:openstack:auth_url', '') }}"
+username="{{ salt.caasp_pillar.get('cloud:openstack:username', '') }}"
+password="{{ salt.caasp_pillar.get('cloud:openstack:password', '') }}"
+{%- if salt.caasp_pillar.get('cloud:openstack:project_id', '') %}
+tenant-id="{{ salt.caasp_pillar.get('cloud:openstack:project_id', '') }}"
{%- else %}
-tenant-name="{{ pillar['cloud']['openstack']['project'] }}"
+tenant-name="{{ salt.caasp_pillar.get('cloud:openstack:project', '') }}"
{%- endif %}
-{%- if pillar['cloud']['openstack']['domain_id'] %}
-domain-id="{{ pillar['cloud']['openstack']['domain_id'] }}"
+{%- if salt.caasp_pillar.get('cloud:openstack:domain_id', '') %}
+domain-id="{{ salt.caasp_pillar.get('cloud:openstack:domain_id', '') }}"
{%- else %}
-domain-name="{{ pillar['cloud']['openstack']['domain'] }}"
+domain-name="{{ salt.caasp_pillar.get('cloud:openstack:domain', '') }}"
{%- endif %}
-region="{{ pillar['cloud']['openstack']['region'] }}"
+region="{{ salt.caasp_pillar.get('cloud:openstack:region', '') }}"
ca-file="/etc/ssl/ca-bundle.pem"
[LoadBalancer]
lb-version=v2
-subnet-id="{{ pillar['cloud']['openstack']['subnet'] }}"
-floating-network-id="{{ pillar['cloud']['openstack']['floating'] }}"
+subnet-id="{{ salt.caasp_pillar.get('cloud:openstack:subnet', '') }}"
+floating-network-id="{{ salt.caasp_pillar.get('cloud:openstack:floating', '') }}"
create-monitor=yes
monitor-delay=1m
monitor-timeout=30s
-monitor-max-retries={{ pillar['cloud']['openstack']['lb_mon_retries'] }}
+monitor-max-retries={{ salt.caasp_pillar.get('cloud:openstack:lb_mon_retries', '') }}
[BlockStorage]
trust-device-path=false
-bs-version={{ pillar['cloud']['openstack']['bs_version'] }}
-ignore-volume-az={{ pillar['cloud']['openstack']['ignore_vol_az'] }}
+bs-version={{ salt.caasp_pillar.get('cloud:openstack:bs_version', '') }}
+ignore-volume-az={{ salt.caasp_pillar.get('cloud:openstack:ignore_vol_az', '') }}
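
Replacing direct `pillar['cloud']['openstack'][...]` indexing with
`salt.caasp_pillar.get('cloud:openstack:...', '')` means a missing key now
renders as an empty value instead of failing the whole template with a
KeyError. A colon-path getter of this kind typically reduces to something
like the following sketch (hypothetical; the real caasp_pillar module may
differ):

    # hypothetical sketch of a tolerant colon-path pillar lookup; inside a
    # real Salt module the data would come from __pillar__ rather than a
    # plain parameter
    def get(path, default=None, pillar=None):
        node = pillar or {}
        for key in path.split(':'):
            try:
                node = node[key]
            except (KeyError, TypeError):
                return default
        return node

With such a helper, `get('cloud:openstack:region', '')` yields '' when no
cloud pillar is configured, rather than a rendering failure.
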
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/salt-master/salt/migrations/2-3/haproxy/init.sls new/salt-master/salt/migrations/2-3/haproxy/init.sls
--- old/salt-master/salt/migrations/2-3/haproxy/init.sls 2019-01-28 16:51:16.000000000 +0100
+++ new/salt-master/salt/migrations/2-3/haproxy/init.sls 2019-02-27 15:33:22.000000000 +0100
@@ -29,7 +29,7 @@
extra_alt_names = alt_master_names()) }}
haproxy:
- file.managed:
+ caasp_file.managed:
- name: /etc/kubernetes/manifests/haproxy.yaml
- source: salt://migrations/2-3/haproxy/haproxy.yaml.jinja
- template: jinja
@@ -37,6 +37,7 @@
- group: root
- mode: 644
- makedirs: True
+ - work_dir: /tmp
- dir_mode: 755
- require:
- kubelet_stop
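
The new `work_dir: /tmp` argument tells `caasp_file.managed` to stage its
temporary file under /tmp instead of next to the target, so a daemon watching
/etc/kubernetes/manifests never sees a half-written haproxy.yaml. Stripped of
Salt, the write-then-move idea looks roughly like this (a sketch under that
assumption, not the module's actual code):

    # sketch: stage the content outside the watched directory, then move it
    import os
    import shutil
    import tempfile

    def write_via_work_dir(target, contents, work_dir='/tmp'):
        fd, tmp_path = tempfile.mkstemp(dir=work_dir)
        try:
            with os.fdopen(fd, 'w') as tmp:
                tmp.write(contents)
            # the temporary file never appears inside the watched target
            # directory
            shutil.move(tmp_path, target)
        finally:
            if os.path.exists(tmp_path):
                os.remove(tmp_path)
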
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/salt-master/salt/migrations/cri/pre-update.sls new/salt-master/salt/migrations/cri/pre-update.sls
--- old/salt-master/salt/migrations/cri/pre-update.sls 2019-01-28 16:51:16.000000000 +0100
+++ new/salt-master/salt/migrations/cri/pre-update.sls 2019-02-27 15:33:22.000000000 +0100
@@ -14,4 +14,11 @@
cmd.script:
- source: salt://migrations/cri/clean-up-crio-pods.sh
+{% else %}
+
+{# See https://github.com/saltstack/salt/issues/14553 #}
+cni-cleanup-dummy:
+ cmd.run:
+ - name: "echo saltstack bug 14553"
+
{% endif %}
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/salt-master/salt/orch/etcd-migrate.sls new/salt-master/salt/orch/etcd-migrate.sls
--- old/salt-master/salt/orch/etcd-migrate.sls 2019-01-28 16:51:16.000000000 +0100
+++ new/salt-master/salt/orch/etcd-migrate.sls 2019-02-27 15:33:22.000000000 +0100
@@ -1,3 +1,6 @@
+{#- Make sure we start with an updated mine #}
+{%- set _ = salt.caasp_orch.sync_all() %}
+
# Generic Updates
update_pillar:
salt.function:
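
`caasp_orch.sync_all()` runs at render time (the `{%- set _ = ... %}` idiom
simply discards its return value), so every orchestration starts from freshly
synced modules and mine data. In stock Salt primitives such a helper
plausibly amounts to the following (hypothetical sketch; the real module may
do more):

    # hypothetical sketch of an orchestration-wide sync helper
    import salt.client

    def sync_all(tgt='*'):
        client = salt.client.LocalClient()
        # push current custom modules/states to every minion ...
        client.cmd(tgt, 'saltutil.sync_all')
        # ... then refresh the mine, so later mine lookups in the
        # orchestration see up-to-date data
        client.cmd(tgt, 'mine.update')
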
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/salt-master/salt/orch/force-removal.sls new/salt-master/salt/orch/force-removal.sls
--- old/salt-master/salt/orch/force-removal.sls 2019-01-28 16:51:16.000000000 +0100
+++ new/salt-master/salt/orch/force-removal.sls 2019-02-27 15:33:22.000000000 +0100
@@ -1,4 +1,7 @@
-# must provide the node (id) to be removed in the 'target' pillar
+{#- Make sure we start with an updated mine #}
+{%- set _ = salt.caasp_orch.sync_all() %}
+
+{#- must provide the node (id) to be removed in the 'target' pillar #}
{%- set target = salt['pillar.get']('target') %}
{%- set super_master = salt.saltutil.runner('manage.up', tgt='G@roles:kube-master and not ' + target, expr_form='compound')|first %}
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/salt-master/salt/orch/kubernetes.sls new/salt-master/salt/orch/kubernetes.sls
--- old/salt-master/salt/orch/kubernetes.sls 2019-01-28 16:51:16.000000000 +0100
+++ new/salt-master/salt/orch/kubernetes.sls 2019-02-27 15:33:22.000000000 +0100
@@ -1,8 +1,11 @@
+{#- Make sure we start with an updated mine #}
+{%- set _ = salt.caasp_orch.sync_all() %}
+
{%- set default_batch = salt['pillar.get']('default_batch', 5) %}
-{%- set etcd_members = salt.saltutil.runner('mine.get', tgt='G@roles:etcd', fun='network.interfaces', tgt_type='compound').keys() %}
-{%- set masters = salt.saltutil.runner('mine.get', tgt='G@roles:kube-master', fun='network.interfaces', tgt_type='compound').keys() %}
-{%- set minions = salt.saltutil.runner('mine.get', tgt='G@roles:kube-minion', fun='network.interfaces', tgt_type='compound').keys() %}
+{%- set etcd_members = salt.caasp_nodes.get_with_expr('G@roles:etcd') %}
+{%- set masters = salt.caasp_nodes.get_with_expr('G@roles:kube-master') %}
+{%- set minions = salt.caasp_nodes.get_with_expr('G@roles:kube-minion') %}
{%- set super_master = masters|first %}
@@ -100,6 +103,7 @@
- tgt: 'roles:ca'
- tgt_type: grain
- highstate: True
+ - timeout: 120
- require:
- etc-hosts-setup
@@ -109,6 +113,7 @@
- tgt_type: grain
- sls:
- kubernetes-common.generate-serviceaccount-key
+ - timeout: 120
- require:
- ca-setup
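
Each removed `mine.get` line above ran the runner and then kept only the
dictionary keys, i.e. the minion ids; `get_with_expr` packages exactly that
pattern. Judging purely from this diff, the wrapper boils down to the
following sketch (inferred from the removed lines, not the module source):

    # __salt__ is the function dict the Salt loader injects into a custom
    # execution module; this mirrors the removed saltutil.runner calls
    def get_with_expr(expr):
        # resolve a compound targeting expression to the matching minion ids
        hits = __salt__['saltutil.runner']('mine.get', tgt=expr,
                                           fun='network.interfaces',
                                           tgt_type='compound')
        return list(hits.keys())
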
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/salt-master/salt/orch/prepare-product-migration.sls new/salt-master/salt/orch/prepare-product-migration.sls
--- old/salt-master/salt/orch/prepare-product-migration.sls 2019-01-28 16:51:16.000000000 +0100
+++ new/salt-master/salt/orch/prepare-product-migration.sls 2019-02-27 15:33:22.000000000 +0100
@@ -1,3 +1,6 @@
+{#- Make sure we start with an updated mine #}
+{%- set _ = salt.caasp_orch.sync_all() %}
+
{#- Get a list of nodes that seem to be down or unresponsive #}
{#- This sends an "are you still there?" message to all #}
{#- the nodes and waits for a response, so it takes some time. #}
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/salt-master/salt/orch/removal.sls new/salt-master/salt/orch/removal.sls
--- old/salt-master/salt/orch/removal.sls 2019-01-28 16:51:16.000000000 +0100
+++ new/salt-master/salt/orch/removal.sls 2019-02-27 15:33:22.000000000 +0100
@@ -1,3 +1,6 @@
+{#- Make sure we start with an updated mine #}
+{%- set _ = salt.caasp_orch.sync_all() %}
+
{#- must provide the node (id) to be removed in the 'target' pillar #}
{%- set target = salt['pillar.get']('target') %}
@@ -22,9 +25,9 @@
{%- endif %}
{%- endif %}
-{%- set etcd_members = salt.saltutil.runner('mine.get', tgt='G@roles:etcd', fun='network.interfaces', tgt_type='compound').keys() %}
-{%- set masters = salt.saltutil.runner('mine.get', tgt='G@roles:kube-master', fun='network.interfaces', tgt_type='compound').keys() %}
-{%- set minions = salt.saltutil.runner('mine.get', tgt='G@roles:kube-minion', fun='network.interfaces', tgt_type='compound').keys() %}
+{%- set etcd_members = salt.caasp_nodes.get_with_expr('G@roles:etcd') %}
+{%- set masters = salt.caasp_nodes.get_with_expr('G@roles:kube-master') %}
+{%- set minions = salt.caasp_nodes.get_with_expr('G@roles:kube-minion') %}
{%- set super_master_tgt = salt.caasp_nodes.get_super_master(masters=masters,
excluded=[target] + nodes_down) %}
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/salt-master/salt/orch/update-etc-hosts.sls new/salt-master/salt/orch/update-etc-hosts.sls
--- old/salt-master/salt/orch/update-etc-hosts.sls 2019-01-28 16:51:16.000000000 +0100
+++ new/salt-master/salt/orch/update-etc-hosts.sls 2019-02-27 15:33:22.000000000 +0100
@@ -1,3 +1,6 @@
+{#- Make sure we start with an updated mine #}
+{%- set _ = salt.caasp_orch.sync_all() %}
+
{%- set updates_all_target = 'P@roles:(admin|etcd|kube-(master|minion)) and ' +
'G@bootstrap_complete:true and ' +
'not G@bootstrap_in_progress:true and ' +
@@ -5,7 +8,8 @@
'not G@removal_in_progress:true and ' +
'not G@force_removal_in_progress:true' %}
-{%- if salt.saltutil.runner('mine.get', tgt=updates_all_target, fun='nodename', tgt_type='compound')|length > 0 %}
+{%- if salt.caasp_nodes.get_with_expr(updates_all_target)|length > 0 %}
+
update_pillar:
salt.function:
- tgt: {{ updates_all_target }}
@@ -37,4 +41,5 @@
- etc-hosts
- require:
- salt: update_mine
+
{% endif %}
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/salt-master/salt/orch/update.sls new/salt-master/salt/orch/update.sls
--- old/salt-master/salt/orch/update.sls 2019-01-28 16:51:16.000000000 +0100
+++ new/salt-master/salt/orch/update.sls 2019-02-27 15:33:22.000000000 +0100
@@ -1,3 +1,6 @@
+{#- Make sure we start with an updated mine #}
+{%- set _ = salt.caasp_orch.sync_all() %}
+
{#- Get a list of nodes that seem to be down or unresponsive #}
{#- This sends an "are you still there?" message to all #}
{#- the nodes and waits for a response, so it takes some time. #}
@@ -29,7 +32,8 @@
{%- set is_updateable_master_tgt = is_updateable_tgt + ' and ' + is_master_tgt %}
{%- set is_updateable_worker_tgt = is_updateable_tgt + ' and ' + is_worker_tgt %}
{%- set is_updateable_node_tgt = '( ' + is_updateable_master_tgt + ' ) or ( ' + is_updateable_worker_tgt + ' )' %}
-{%- set all_masters = salt.saltutil.runner('mine.get', tgt=is_master_tgt, fun='network.interfaces', tgt_type='compound').keys() %}
+
+{%- set all_masters = salt.caasp_nodes.get_with_expr(is_master_tgt) %}
{%- set super_master = all_masters|first %}
{%- set is_migration = salt['pillar.get']('migration', false) %}
@@ -194,8 +198,8 @@
- etcd-setup
# Get list of masters needing reboot
-{%- set masters = salt.saltutil.runner('mine.get', tgt=is_updateable_master_tgt, fun='network.interfaces', tgt_type='compound') %}
-{%- for master_id in masters.keys() %}
+{%- set masters = salt.caasp_nodes.get_with_expr(is_updateable_master_tgt) %}
+{%- for master_id in masters %}
# Kubelet needs other services, e.g. the CRI, up and running. This provides a
# way to ensure kubelet is stopped before any other services.
@@ -305,13 +309,13 @@
- kubelet.update-post-start-services
- require:
- early-services-setup
-{%- for master_id in masters.keys() %}
+{%- for master_id in masters %}
- {{ master_id }}-start-services
{%- endfor %}
# We remove the grain only after the last reference to it; otherwise an
# incomplete subset of minions might be targeted.
-{%- for master_id in masters.keys() %}
+{%- for master_id in masters %}
{{ master_id }}-reboot-needed-grain:
salt.function:
- tgt: '{{ master_id }}'
@@ -345,13 +349,13 @@
- migrations.2-3.haproxy
- require:
- all-masters-post-start-services
-{%- for master_id in masters.keys() %}
+{%- for master_id in masters %}
- {{ master_id }}-reboot-needed-grain
{%- endfor %}
# END NOTE: Remove me for 4.0
-{%- set workers = salt.saltutil.runner('mine.get', tgt=is_updateable_worker_tgt, fun='network.interfaces', tgt_type='compound') %}
-{%- for worker_id, ip in workers.items() %}
+{%- set workers = salt.caasp_nodes.get_with_expr(is_updateable_worker_tgt) %}
+{%- for worker_id in workers %}
# Call the node clean shutdown script
# Kubelet needs other services, e.g. the CRI, up and running. This provides a way
@@ -365,7 +369,7 @@
- require:
- all-workers-2.0-pre-clean-shutdown
# wait until all the masters have been updated
-{%- for master_id in masters.keys() %}
+{%- for master_id in masters %}
- {{ master_id }}-reboot-needed-grain
{%- endfor %}
@@ -504,13 +508,13 @@
- require:
- all-masters-post-start-services
# wait until all the machines in the cluster have been upgraded
-{%- for master_id in masters.keys() %}
+{%- for master_id in masters %}
# We use the last state within the masters loop, which is different
# on masters and minions.
- {{ master_id }}-reboot-needed-grain
{%- endfor %}
{%- if not is_migration %}
-{%- for worker_id in workers.keys() %}
+{%- for worker_id in workers %}
- {{ worker_id }}-remove-progress-grain
{%- endfor %}
{% endif %}