openSUSE Commits
September 2020
Hello community,
here is the log from the commit of package 00Meta for openSUSE:Leap:15.2:Images checked in at 2020-09-01 17:03:07
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Leap:15.2:Images/00Meta (Old)
and /work/SRC/openSUSE:Leap:15.2:Images/.00Meta.new.3399 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Package is "00Meta"
Tue Sep 1 17:03:07 2020 rev:490 rq: version:unknown
Changes:
--------
New Changes file:
NO CHANGES FILE!!!
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Other differences:
------------------
++++++ version_totest ++++++
--- /var/tmp/diff_new_pack.EZIQPF/_old 2020-09-01 17:03:09.283576761 +0200
+++ /var/tmp/diff_new_pack.EZIQPF/_new 2020-09-01 17:03:09.283576761 +0200
@@ -1 +1 @@
-31.157
\ No newline at end of file
+31.158
\ No newline at end of file
Hello community,
here is the log from the commit of package 00Meta for openSUSE:Leap:15.2:Images checked in at 2020-09-01 16:45:33
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Leap:15.2:Images/00Meta (Old)
and /work/SRC/openSUSE:Leap:15.2:Images/.00Meta.new.3399 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Package is "00Meta"
Tue Sep 1 16:45:33 2020 rev:489 rq: version:unknown
Changes:
--------
New Changes file:
NO CHANGES FILE!!!
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Other differences:
------------------
++++++ version_snapshot ++++++
--- /var/tmp/diff_new_pack.wscPWm/_old 2020-09-01 16:45:34.967221903 +0200
+++ /var/tmp/diff_new_pack.wscPWm/_new 2020-09-01 16:45:34.971221904 +0200
@@ -1 +1 @@
-31.155
\ No newline at end of file
+31.157
\ No newline at end of file
Hello community,
here is the log from the commit of package 00Meta for openSUSE:Leap:15.1:Images checked in at 2020-09-01 13:30:33
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Leap:15.1:Images/00Meta (Old)
and /work/SRC/openSUSE:Leap:15.1:Images/.00Meta.new.3399 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Package is "00Meta"
Tue Sep 1 13:30:33 2020 rev:485 rq: version:unknown
Changes:
--------
New Changes file:
NO CHANGES FILE!!!
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Other differences:
------------------
++++++ version_totest ++++++
--- /var/tmp/diff_new_pack.t6c4Rl/_old 2020-09-01 13:30:35.130076676 +0200
+++ /var/tmp/diff_new_pack.t6c4Rl/_new 2020-09-01 13:30:35.130076676 +0200
@@ -1 +1 @@
-8.12.111
\ No newline at end of file
+8.12.112
\ No newline at end of file
Hello community,
here is the log from the commit of package 00Meta for openSUSE:Leap:15.2:Images checked in at 2020-09-01 13:00:35
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Leap:15.2:Images/00Meta (Old)
and /work/SRC/openSUSE:Leap:15.2:Images/.00Meta.new.3399 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Package is "00Meta"
Tue Sep 1 13:00:35 2020 rev:488 rq: version:unknown
Changes:
--------
New Changes file:
NO CHANGES FILE!!!
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Other differences:
------------------
++++++ version_totest ++++++
--- /var/tmp/diff_new_pack.xEDQiw/_old 2020-09-01 13:00:37.489303034 +0200
+++ /var/tmp/diff_new_pack.xEDQiw/_new 2020-09-01 13:00:37.489303034 +0200
@@ -1 +1 @@
-31.156
\ No newline at end of file
+31.157
\ No newline at end of file
Hello community,
here is the log from the commit of package lxd for openSUSE:Leap:15.2:Update checked in at 2020-09-01 12:34:38
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Leap:15.2:Update/lxd (Old)
and /work/SRC/openSUSE:Leap:15.2:Update/.lxd.new.3399 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Package is "lxd"
Tue Sep 1 12:34:38 2020 rev:3 rq:830443 version:unknown
Changes:
--------
New Changes file:
NO CHANGES FILE!!!
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Other differences:
------------------
++++++ _link ++++++
--- /var/tmp/diff_new_pack.aXBHjR/_old 2020-09-01 12:34:53.776342882 +0200
+++ /var/tmp/diff_new_pack.aXBHjR/_new 2020-09-01 12:34:53.776342882 +0200
@@ -1 +1 @@
-<link package='lxd.13587' cicount='copy' />
+<link package='lxd.13821' cicount='copy' />
Hello community,
here is the log from the commit of package lxd for openSUSE:Leap:15.1:Update checked in at 2020-09-01 12:34:07
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Leap:15.1:Update/lxd (Old)
and /work/SRC/openSUSE:Leap:15.1:Update/.lxd.new.3399 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Package is "lxd"
Tue Sep 1 12:34:07 2020 rev:15 rq:830442 version:unknown
Changes:
--------
New Changes file:
NO CHANGES FILE!!!
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Other differences:
------------------
++++++ _link ++++++
--- /var/tmp/diff_new_pack.kRUrS0/_old 2020-09-01 12:34:50.512341489 +0200
+++ /var/tmp/diff_new_pack.kRUrS0/_new 2020-09-01 12:34:50.512341489 +0200
@@ -1 +1 @@
-<link package='lxd.13586' cicount='copy' />
+<link package='lxd.13820' cicount='copy' />
Hello community,
here is the log from the commit of package e2fsprogs for openSUSE:Leap:15.1:Update checked in at 2020-09-01 12:32:54
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Leap:15.1:Update/e2fsprogs (Old)
and /work/SRC/openSUSE:Leap:15.1:Update/.e2fsprogs.new.3399 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Package is "e2fsprogs"
Tue Sep 1 12:32:54 2020 rev:6 rq:830440 version:unknown
Changes:
--------
New Changes file:
NO CHANGES FILE!!!
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Other differences:
------------------
++++++ _link ++++++
--- /var/tmp/diff_new_pack.BVoXvx/_old 2020-09-01 12:34:14.172325985 +0200
+++ /var/tmp/diff_new_pack.BVoXvx/_new 2020-09-01 12:34:14.172325985 +0200
@@ -1 +1 @@
-<link package='e2fsprogs.12288' cicount='copy' />
+<link package='e2fsprogs.13818' cicount='copy' />
Hello community,
here is the log from the commit of package salt for openSUSE:Leap:15.2:Update checked in at 2020-09-01 12:31:31
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Leap:15.2:Update/salt (Old)
and /work/SRC/openSUSE:Leap:15.2:Update/.salt.new.3399 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Package is "salt"
Tue Sep 1 12:31:31 2020 rev:3 rq:830439 version:unknown
Changes:
--------
New Changes file:
NO CHANGES FILE!!!
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Other differences:
------------------
++++++ _link ++++++
--- /var/tmp/diff_new_pack.Z7HFcK/_old 2020-09-01 12:34:11.452324825 +0200
+++ /var/tmp/diff_new_pack.Z7HFcK/_new 2020-09-01 12:34:11.452324825 +0200
@@ -1 +1 @@
-<link package='salt.13425' cicount='copy' />
+<link package='salt.13816' cicount='copy' />
Hello community,
here is the log from the commit of package salt.13816 for openSUSE:Leap:15.2:Update checked in at 2020-09-01 12:31:24
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Leap:15.2:Update/salt.13816 (Old)
and /work/SRC/openSUSE:Leap:15.2:Update/.salt.13816.new.3399 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Package is "salt.13816"
Tue Sep 1 12:31:24 2020 rev:1 rq:830439 version:3000
Changes:
--------
New Changes file:
--- /dev/null 2020-08-06 00:20:10.149648038 +0200
+++ /work/SRC/openSUSE:Leap:15.2:Update/.salt.13816.new.3399/salt.changes 2020-09-01 12:34:01.240320468 +0200
@@ -0,0 +1,4860 @@
+-------------------------------------------------------------------
+Wed Aug 12 14:15:09 UTC 2020 - Pablo Suárez Hernández <pablo.suarezhernandez(a)suse.com>
+
+- Require /usr/bin/python instead of /bin/python for RHEL-family (bsc#1173936)
+- Don't install SuSEfirewall2 service files in Factory
+- Fix __mount_device wrapper to accept separate args and kwargs
+- Fix the registration of libvirt pool and nodedev events
+- Accept nested namespaces in spacewalk.api runner function. (bsc#1172211)
+- info_installed works without status attr now (bsc#1171461)
+
+- Added:
+ * info_installed-works-without-status-attr-now.patch
+ * fix-__mount_device-wrapper-253.patch
+ * opensuse-3000-libvirt-engine-fixes-248.patch
+ * opensuse-3000-spacewalk-runner-parse-command-247.patch
+
+-------------------------------------------------------------------
+Thu Jul 16 08:23:32 UTC 2020 - Jochen Breuer <jbreuer(a)suse.de>
+
+- Fix for TypeError in Tornado importer (bsc#1174165)
+
+- Added:
+ * fix-type-error-in-tornadoimporter.patch
+
+-------------------------------------------------------------------
+Thu Jun 18 15:10:34 UTC 2020 - Pablo Suárez Hernández <pablo.suarezhernandez(a)suse.com>
+
+- Require python3-distro only for TW (bsc#1173072)
+
+-------------------------------------------------------------------
+Thu Jun 11 11:39:11 UTC 2020 - Pablo Suárez Hernández <pablo.suarezhernandez(a)suse.com>
+
+- Various virt backports from 3000.2
+
+- Added:
+ * opensuse-3000.2-virt-backports-236.patch
+
+-------------------------------------------------------------------
+Mon Jun 8 09:31:23 UTC 2020 - Pablo Suárez Hernández <pablo.suarezhernandez(a)suse.com>
+
+- Avoid traceback on debug logging for swarm module (bsc#1172075)
+- Add publish_batch to ClearFuncs exposed methods
+- zypperpkg: filter patterns that start with dot (bsc#1171906)
+- Batch mode now also correctly provides return value (bsc#1168340)
+- Add docker.logout to docker execution module (bsc#1165572)
+- Testsuite fix
+- Add option to enable/disable force refresh for zypper
+- Python3.8 compatibility changes
+- Prevent spurious "salt-api" stuck processes when managing SSH minions because of logging deadlock (bsc#1159284)
+- Avoid segfault from "salt-api" under certain conditions of heavy load managing SSH minions (bsc#1169604)
+- Revert broken changes to slspath made on Salt 3000 (saltstack/salt#56341) (bsc#1170104)
+- Return the list of IPs filtered by the optional network list
+
+- Added:
+ * option-to-en-disable-force-refresh-in-zypper-215.patch
+ * zypperpkg-filter-patterns-that-start-with-dot-243.patch
+ * prevent-logging-deadlock-on-salt-api-subprocesses-bs.patch
+ * revert-changes-to-slspath-saltstack-salt-56341.patch
+ * fix-for-return-value-ret-vs-return-in-batch-mode.patch
+ * add-docker-logout-237.patch
+ * add-ip-filtering-by-network.patch
+ * make-lazyloader.__init__-call-to-_refresh_file_mappi.patch
+ * add-publish_batch-to-clearfuncs-exposed-methods.patch
+ * python3.8-compatibility-pr-s-235.patch
+ * fix-a-test-and-some-variable-names-229.patch
+ * avoid-has_docker-true-if-import-messes-with-salt.uti.patch
+
+-------------------------------------------------------------------
+Thu May 7 09:22:53 UTC 2020 - Pablo Suárez Hernández <pablo.suarezhernandez(a)suse.com>
+
+- Fix CVE-2020-11651 and CVE-2020-11652 (bsc#1170595)
+- Do not require vendored backports-abc (bsc#1170288)
+- Fix partition.mkpart to work without fstype (bsc#1169800)
+
+- Added:
+ * fixed-bug-lvm-has-no-parttion-type.-the-scipt-later-.patch
+ * remove-vendored-backports-abc-from-requirements.patch
+ * fix-cve-2020-11651-and-fix-cve-2020-11652.patch
+
+-------------------------------------------------------------------
+Tue Apr 7 10:38:57 UTC 2020 - Pablo Suárez Hernández <pablo.suarezhernandez(a)suse.com>
+
+- Update to Salt version 3000
+ See release notes: https://docs.saltstack.com/en/latest/topics/releases/3000.html
+
+- Do not make file.recurse state to fail when msgpack 0.5.4 (bsc#1167437)
+- Fixes status attribute issue in aptpkg test
+- Make setup.py script not to require setuptools greater than 9.1
+ loop: fix variable names for until_no_eval
+- Drop conflictive module.run state patch (bsc#1167437)
+- Update patches after rebase with upstream v3000 tag (bsc#1167437)
+- Fix some requirements issues depending on Python3 versions
+- Removes obsolete patch
+- Fix for low rpm_lowpkg unit test
+- Add python-singledispatch as dependency for python2-salt
+- Fix for temp folder definition in loader unit test
+- Make "salt.ext.tornado.gen" to use "salt.ext.backports_abc" on Python 2
+- Fix regression in service states with reload argument
+- Fix integration test failure for test_mod_del_repo_multiline_values
+- Fix for unless requisite when pip is not installed
+- Fix errors from unit tests due to NO_MOCK and NO_MOCK_REASON deprecation
+- Fix tornado imports and missing _utils after rebasing patches
+- Removes unresolved merge conflict in yumpkg module
+
+- Added:
+ * make-setup.py-script-to-not-require-setuptools-9.1.patch
+ * opensuse-3000-virt-defined-states-222.patch
+ * fix-for-unless-requisite-when-pip-is-not-installed.patch
+ * fix-typo-on-msgpack-version-when-sanitizing-msgpack-.patch
+ * fix-regression-in-service-states-with-reload-argumen.patch
+ * batch_async-avoid-using-fnmatch-to-match-event-217.patch
+ * make-salt.ext.tornado.gen-to-use-salt.ext.backports_.patch
+ * virt._get_domain-don-t-raise-an-exception-if-there-i.patch
+ * loop-fix-variable-names-for-until_no_eval.patch
+ * removes-unresolved-merge-conflict-in-yumpkg-module.patch
+ * add-missing-_utils-at-loader-grains_func.patch
+ * changed-imports-to-vendored-tornado.patch
+ * sanitize-grains-loaded-from-roster_grains.json.patch
+ * fix-for-temp-folder-definition-in-loader-unit-test.patch
+ * remove-deprecated-usage-of-no_mock-and-no_mock_reaso.patch
+ * reintroducing-reverted-changes.patch
+ * adds-explicit-type-cast-for-port.patch
+ * fix-wrong-test_mod_del_repo_multiline_values-test-af.patch
+ * re-adding-function-to-test-for-root.patch
+
+- Modified:
+ * move-server_id-deprecation-warning-to-reduce-log-spa.patch
+ * let-salt-ssh-use-platform-python-binary-in-rhel8-191.patch
+ * strip-trailing-from-repo.uri-when-comparing-repos-in.patch
+ * prevent-test_mod_del_repo_multiline_values-to-fail.patch
+ * prevent-ansiblegate-unit-tests-to-fail-on-ubuntu.patch
+ * remove-arch-from-name-when-pkg.list_pkgs-is-called-w.patch
+ * async-batch-implementation.patch
+ * add-hold-unhold-functions.patch
+ * add-all_versions-parameter-to-include-all-installed-.patch
+ * enable-passing-a-unix_socket-for-mysql-returners-bsc.patch
+ * fix-for-log-checking-in-x509-test.patch
+ * fix-zypper.list_pkgs-to-be-aligned-with-pkg-state.patch
+ * add-multi-file-support-and-globbing-to-the-filetree-.patch
+ * remove-unnecessary-yield-causing-badyielderror-bsc-1.patch
+ * fix-bsc-1065792.patch
+ * use-threadpool-from-multiprocessing.pool-to-avoid-le.patch
+ * return-the-expected-powerpc-os-arch-bsc-1117995.patch
+ * fixes-cve-2018-15750-cve-2018-15751.patch
+ * add-cpe_name-for-osversion-grain-parsing-u-49946.patch
+ * fix-failing-unit-tests-for-batch-async.patch
+ * decide-if-the-source-should-be-actually-skipped.patch
+ * allow-passing-kwargs-to-pkg.list_downloaded-bsc-1140.patch
+ * add-batch_presence_ping_timeout-and-batch_presence_p.patch
+ * run-salt-master-as-dedicated-salt-user.patch
+ * use-current-ioloop-for-the-localclient-instance-of-b.patch
+ * integration-of-msi-authentication-with-azurearm-clou.patch
+ * temporary-fix-extend-the-whitelist-of-allowed-comman.patch
+ * improve-batch_async-to-release-consumed-memory-bsc-1.patch
+ * fix-unit-test-for-grains-core.patch
+ * add-supportconfig-module-for-remote-calls-and-saltss.patch
+ * avoid-excessive-syslogging-by-watchdog-cronjob-58.patch
+ * debian-info_installed-compatibility-50453.patch
+ * include-aliases-in-the-fqdns-grains.patch
+ * implement-network.fqdns-module-function-bsc-1134860-.patch
+ * fix-async-batch-multiple-done-events.patch
+ * support-config-non-root-permission-issues-fixes-u-50.patch
+ * fix-zypper-pkg.list_pkgs-expectation-and-dpkg-mockin.patch
+ * activate-all-beacons-sources-config-pillar-grains.patch
+ * avoid-traceback-when-http.query-request-cannot-be-pe.patch
+ * fix-aptpkg-systemd-call-bsc-1143301.patch
+ * use-adler32-algorithm-to-compute-string-checksums.patch
+ * do-not-break-repo-files-with-multiple-line-values-on.patch
+ * fix-batch_async-obsolete-test.patch
+ * provide-the-missing-features-required-for-yomi-yet-o.patch
+ * fall-back-to-pymysql.patch
+ * xfs-do-not-fails-if-type-is-not-present.patch
+ * restore-default-behaviour-of-pkg-list-return.patch
+ * add-missing-fun-for-returns-from-wfunc-executions.patch
+ * virt-adding-kernel-boot-parameters-to-libvirt-xml-55.patch
+ * run-salt-api-as-user-salt-bsc-1064520.patch
+ * loosen-azure-sdk-dependencies-in-azurearm-cloud-driv.patch
+ * support-for-btrfs-and-xfs-in-parted-and-mkfs.patch
+ * fixing-streamclosed-issue.patch
+ * do-not-crash-when-there-are-ipv6-established-connect.patch
+ * calculate-fqdns-in-parallel-to-avoid-blockings-bsc-1.patch
+ * fix-async-batch-race-conditions.patch
+ * fix-issue-2068-test.patch
+ * fix-a-wrong-rebase-in-test_core.py-180.patch
+ * fix-for-suse-expanded-support-detection.patch
+ * add-environment-variable-to-know-if-yum-is-invoked-f.patch
+ * add-standalone-configuration-file-for-enabling-packa.patch
+ * switch-firewalld-state-to-use-change_interface.patch
+ * do-not-make-ansiblegate-to-crash-on-python3-minions.patch
+ * make-aptpkg.list_repos-compatible-on-enabled-disable.patch
+ * add-custom-suse-capabilities-as-grains.patch
+ * accumulated-changes-from-yomi-167.patch
+ * get-os_arch-also-without-rpm-package-installed.patch
+ * fix-git_pillar-merging-across-multiple-__env__-repos.patch
+ * do-not-load-pip-state-if-there-is-no-3rd-party-depen.patch
+ * add-saltssh-multi-version-support-across-python-inte.patch
+ * early-feature-support-config.patch
++++ 4663 more lines (skipped)
++++ between /dev/null
++++ and /work/SRC/openSUSE:Leap:15.2:Update/.salt.13816.new.3399/salt.changes
New:
----
README.SUSE
_lastrevision
_service
accumulated-changes-from-yomi-167.patch
accumulated-changes-required-for-yomi-165.patch
activate-all-beacons-sources-config-pillar-grains.patch
add-all_versions-parameter-to-include-all-installed-.patch
add-astra-linux-common-edition-to-the-os-family-list.patch
add-batch_presence_ping_timeout-and-batch_presence_p.patch
add-cpe_name-for-osversion-grain-parsing-u-49946.patch
add-custom-suse-capabilities-as-grains.patch
add-docker-logout-237.patch
add-environment-variable-to-know-if-yum-is-invoked-f.patch
add-hold-unhold-functions.patch
add-ip-filtering-by-network.patch
add-missing-_utils-at-loader-grains_func.patch
add-missing-fun-for-returns-from-wfunc-executions.patch
add-multi-file-support-and-globbing-to-the-filetree-.patch
add-new-custom-suse-capability-for-saltutil-state-mo.patch
add-publish_batch-to-clearfuncs-exposed-methods.patch
add-saltssh-multi-version-support-across-python-inte.patch
add-standalone-configuration-file-for-enabling-packa.patch
add-supportconfig-module-for-remote-calls-and-saltss.patch
add-virt.all_capabilities.patch
adds-explicit-type-cast-for-port.patch
allow-passing-kwargs-to-pkg.list_downloaded-bsc-1140.patch
apply-patch-from-upstream-to-support-python-3.8.patch
async-batch-implementation.patch
avoid-excessive-syslogging-by-watchdog-cronjob-58.patch
avoid-has_docker-true-if-import-messes-with-salt.uti.patch
avoid-traceback-when-http.query-request-cannot-be-pe.patch
batch-async-catch-exceptions-and-safety-unregister-a.patch
batch.py-avoid-exception-when-minion-does-not-respon.patch
batch_async-avoid-using-fnmatch-to-match-event-217.patch
calculate-fqdns-in-parallel-to-avoid-blockings-bsc-1.patch
changed-imports-to-vendored-tornado.patch
debian-info_installed-compatibility-50453.patch
decide-if-the-source-should-be-actually-skipped.patch
do-not-break-repo-files-with-multiple-line-values-on.patch
do-not-crash-when-there-are-ipv6-established-connect.patch
do-not-load-pip-state-if-there-is-no-3rd-party-depen.patch
do-not-make-ansiblegate-to-crash-on-python3-minions.patch
do-not-report-patches-as-installed-when-not-all-the-.patch
don-t-call-zypper-with-more-than-one-no-refresh.patch
early-feature-support-config.patch
enable-passing-a-unix_socket-for-mysql-returners-bsc.patch
fall-back-to-pymysql.patch
fix-__mount_device-wrapper-253.patch
fix-a-test-and-some-variable-names-229.patch
fix-a-wrong-rebase-in-test_core.py-180.patch
fix-aptpkg-systemd-call-bsc-1143301.patch
fix-async-batch-multiple-done-events.patch
fix-async-batch-race-conditions.patch
fix-batch_async-obsolete-test.patch
fix-bsc-1065792.patch
fix-cve-2020-11651-and-fix-cve-2020-11652.patch
fix-failing-unit-tests-for-batch-async.patch
fix-for-log-checking-in-x509-test.patch
fix-for-return-value-ret-vs-return-in-batch-mode.patch
fix-for-suse-expanded-support-detection.patch
fix-for-temp-folder-definition-in-loader-unit-test.patch
fix-for-unless-requisite-when-pip-is-not-installed.patch
fix-git_pillar-merging-across-multiple-__env__-repos.patch
fix-ipv6-scope-bsc-1108557.patch
fix-issue-2068-test.patch
fix-memory-leak-produced-by-batch-async-find_jobs-me.patch
fix-regression-in-service-states-with-reload-argumen.patch
fix-type-error-in-tornadoimporter.patch
fix-typo-on-msgpack-version-when-sanitizing-msgpack-.patch
fix-unit-test-for-grains-core.patch
fix-unit-tests-for-batch-async-after-refactor.patch
fix-wrong-test_mod_del_repo_multiline_values-test-af.patch
fix-zypper-pkg.list_pkgs-expectation-and-dpkg-mockin.patch
fix-zypper.list_pkgs-to-be-aligned-with-pkg-state.patch
fixed-bug-lvm-has-no-parttion-type.-the-scipt-later-.patch
fixes-cve-2018-15750-cve-2018-15751.patch
fixing-streamclosed-issue.patch
get-os_arch-also-without-rpm-package-installed.patch
html.tar.bz2
implement-network.fqdns-module-function-bsc-1134860-.patch
improve-batch_async-to-release-consumed-memory-bsc-1.patch
include-aliases-in-the-fqdns-grains.patch
info_installed-works-without-status-attr-now.patch
integration-of-msi-authentication-with-azurearm-clou.patch
let-salt-ssh-use-platform-python-binary-in-rhel8-191.patch
loader-invalidate-the-import-cachefor-extra-modules.patch
loop-fix-variable-names-for-until_no_eval.patch
loosen-azure-sdk-dependencies-in-azurearm-cloud-driv.patch
make-aptpkg.list_repos-compatible-on-enabled-disable.patch
make-lazyloader.__init__-call-to-_refresh_file_mappi.patch
make-profiles-a-package.patch
make-salt.ext.tornado.gen-to-use-salt.ext.backports_.patch
make-setup.py-script-to-not-require-setuptools-9.1.patch
move-server_id-deprecation-warning-to-reduce-log-spa.patch
opensuse-3000-libvirt-engine-fixes-248.patch
opensuse-3000-spacewalk-runner-parse-command-247.patch
opensuse-3000-virt-defined-states-222.patch
opensuse-3000.2-virt-backports-236.patch
option-to-en-disable-force-refresh-in-zypper-215.patch
prevent-ansiblegate-unit-tests-to-fail-on-ubuntu.patch
prevent-logging-deadlock-on-salt-api-subprocesses-bs.patch
prevent-systemd-run-description-issue-when-running-a.patch
prevent-test_mod_del_repo_multiline_values-to-fail.patch
provide-the-missing-features-required-for-yomi-yet-o.patch
python3.8-compatibility-pr-s-235.patch
re-adding-function-to-test-for-root.patch
read-repo-info-without-using-interpolation-bsc-11356.patch
reintroducing-reverted-changes.patch
remove-arch-from-name-when-pkg.list_pkgs-is-called-w.patch
remove-deprecated-usage-of-no_mock-and-no_mock_reaso.patch
remove-unnecessary-yield-causing-badyielderror-bsc-1.patch
remove-vendored-backports-abc-from-requirements.patch
removes-unresolved-merge-conflict-in-yumpkg-module.patch
restore-default-behaviour-of-pkg-list-return.patch
return-the-expected-powerpc-os-arch-bsc-1117995.patch
revert-changes-to-slspath-saltstack-salt-56341.patch
run-salt-api-as-user-salt-bsc-1064520.patch
run-salt-master-as-dedicated-salt-user.patch
salt-tmpfiles.d
salt.changes
salt.spec
sanitize-grains-loaded-from-roster_grains.json.patch
strip-trailing-from-repo.uri-when-comparing-repos-in.patch
support-config-non-root-permission-issues-fixes-u-50.patch
support-for-btrfs-and-xfs-in-parted-and-mkfs.patch
switch-firewalld-state-to-use-change_interface.patch
temporary-fix-extend-the-whitelist-of-allowed-comman.patch
travis.yml
update-documentation.sh
use-adler32-algorithm-to-compute-string-checksums.patch
use-current-ioloop-for-the-localclient-instance-of-b.patch
use-full-option-name-instead-of-undocumented-abbrevi.patch
use-threadpool-from-multiprocessing.pool-to-avoid-le.patch
v3000.tar.gz
virt-adding-kernel-boot-parameters-to-libvirt-xml-55.patch
virt._get_domain-don-t-raise-an-exception-if-there-i.patch
x509-fixes-111.patch
xfs-do-not-fails-if-type-is-not-present.patch
zypperpkg-filter-patterns-that-start-with-dot-243.patch
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Other differences:
------------------
++++++ salt.spec ++++++
++++ 1720 lines (skipped)
++++++ README.SUSE ++++++
Salt-master as non-root user
============================
With this version of Salt, the salt-master will run as the salt user.
Why an extra user
=================
While the current setup runs the master as the root user, this is considered a security issue
and not in line with other configuration management tools (e.g. Puppet), which run as a
dedicated user.
How can I undo the change
=========================
If you would like to undo the change, you can do the following steps manually:
1. change the user parameter in the master configuration
user: root
2. update the file permissions:
as root: chown -R root /etc/salt /var/cache/salt /var/log/salt /var/run/salt
3. restart the salt-master daemon:
as root: rcsalt-master restart or systemctl restart salt-master
NOTE
====
Running the salt-master daemon as the root user is considered by some a security risk, but
running as root enables the PAM external auth system, as this system needs root access to check authentication.
For more information:
http://docs.saltstack.com/en/latest/ref/configuration/nonroot.html
++++++ _lastrevision ++++++
185f2b89a08da805af6408aa2d8e7077f5121836
++++++ _service ++++++
<services>
<service name="tar_scm" mode="disabled">
<param name="url">https://github.com/openSUSE/salt-packaging.git</param>
<param name="subdir">salt</param>
<param name="filename">package</param>
<param name="revision">3000</param>
<param name="scm">git</param>
</service>
<service name="extract_file" mode="disabled">
<param name="archive">*package*.tar</param>
<param name="files">*/*</param>
</service>
<service name="download_url" mode="disabled">
<param name="host">codeload.github.com</param>
<param name="path">openSUSE/salt/tar.gz/v3000-suse</param>
<param name="filename">v3000.tar.gz</param>
</service>
<service name="update_changelog" mode="disabled"></service>
</services>
++++++ accumulated-changes-from-yomi-167.patch ++++++
From 63f28a891449889fa3d7139470266162b10e88f2 Mon Sep 17 00:00:00 2001
From: Alberto Planas <aplanas(a)gmail.com>
Date: Tue, 22 Oct 2019 11:02:33 +0200
Subject: [PATCH] Accumulated changes from Yomi (#167)
* core.py: ignore wrong product_name files
Some firmware (like on some NUC machines) does not provide valid
/sys/class/dmi/id/product_name strings. In those cases a
UnicodeDecodeError exception happens.
This patch ignores this kind of issue during grains creation (a rough
sketch follows this list).
(cherry picked from commit 2d57d2a6063488ad9329a083219e3826e945aa2d)
* zypperpkg: understand product type
(cherry picked from commit b865491b74679140f7a71c5ba50d482db47b600f)
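A minimal sketch of the defensive read described in the first item above, assuming the DMI path shown; the helper name and empty fallback are illustrative, and the real fix is in the salt/grains/core.py diff below:

    # Illustrative only: read a DMI attribute without crashing on
    # firmware that writes undecodable bytes into the file.
    def read_product_name(path='/sys/class/dmi/id/product_name'):
        try:
            with open(path) as handle:
                return handle.read().strip()
        except UnicodeDecodeError:
            # Treat a non-decodable value as absent instead of letting
            # grain collection fail.
            return None
        except IOError:
            return None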
---
salt/grains/core.py | 4 +++
salt/modules/zypperpkg.py | 30 +++++++++++-----
tests/unit/grains/test_core.py | 68 ++++++++++++++++++++++++++++++++++++
tests/unit/modules/test_zypperpkg.py | 26 ++++++++++++++
4 files changed, 119 insertions(+), 9 deletions(-)
diff --git a/salt/grains/core.py b/salt/grains/core.py
index 77ae99590f..68c43482d3 100644
--- a/salt/grains/core.py
+++ b/salt/grains/core.py
@@ -997,6 +997,10 @@ def _virtual(osdata):
grains['virtual'] = 'gce'
elif 'BHYVE' in output:
grains['virtual'] = 'bhyve'
+ except UnicodeDecodeError:
+ # Some firmwares provide non-valid 'product_name'
+ # files, ignore them
+ pass
except IOError:
pass
elif osdata['kernel'] == 'FreeBSD':
diff --git a/salt/modules/zypperpkg.py b/salt/modules/zypperpkg.py
index f7158e0810..5f3b6d6855 100644
--- a/salt/modules/zypperpkg.py
+++ b/salt/modules/zypperpkg.py
@@ -863,23 +863,35 @@ def list_pkgs(versions_as_list=False, root=None, includes=None, **kwargs):
_ret[pkgname] = sorted(ret[pkgname], key=lambda d: d['version'])
for include in includes:
+ if include == 'product':
+ products = list_products(all=False, root=root)
+ for product in products:
+ extended_name = '{}:{}'.format(include, product['name'])
+ _ret[extended_name] = [{
+ 'epoch': product['epoch'],
+ 'version': product['version'],
+ 'release': product['release'],
+ 'arch': product['arch'],
+ 'install_date': None,
+ 'install_date_time_t': None,
+ }]
if include in ('pattern', 'patch'):
if include == 'pattern':
- pkgs = list_installed_patterns(root=root)
+ elements = list_installed_patterns(root=root)
elif include == 'patch':
- pkgs = list_installed_patches(root=root)
+ elements = list_installed_patches(root=root)
else:
- pkgs = []
- for pkg in pkgs:
- pkg_extended_name = '{}:{}'.format(include, pkg)
- info = info_available(pkg_extended_name,
+ elements = []
+ for element in elements:
+ extended_name = '{}:{}'.format(include, element)
+ info = info_available(extended_name,
refresh=False,
root=root)
- _ret[pkg_extended_name] = [{
+ _ret[extended_name] = [{
'epoch': None,
- 'version': info[pkg]['version'],
+ 'version': info[element]['version'],
'release': None,
- 'arch': info[pkg]['arch'],
+ 'arch': info[element]['arch'],
'install_date': None,
'install_date_time_t': None,
}]
diff --git a/tests/unit/grains/test_core.py b/tests/unit/grains/test_core.py
index 60914204b0..c4731f667a 100644
--- a/tests/unit/grains/test_core.py
+++ b/tests/unit/grains/test_core.py
@@ -1543,3 +1543,71 @@ class CoreGrainsTestCase(TestCase, LoaderModuleMockMixin):
self.assertIn('osfullname', os_grains)
self.assertEqual(os_grains.get('osfullname'), 'FreeBSD')
+
+ @skipIf(not salt.utils.platform.is_linux(), 'System is not Linux')
+ def test_kernelparams_return(self):
+ expectations = [
+ ('BOOT_IMAGE=/vmlinuz-3.10.0-693.2.2.el7.x86_64',
+ {'kernelparams': [('BOOT_IMAGE', '/vmlinuz-3.10.0-693.2.2.el7.x86_64')]}),
+ ('root=/dev/mapper/centos_daemon-root',
+ {'kernelparams': [('root', '/dev/mapper/centos_daemon-root')]}),
+ ('rhgb quiet ro',
+ {'kernelparams': [('rhgb', None), ('quiet', None), ('ro', None)]}),
+ ('param="value1"',
+ {'kernelparams': [('param', 'value1')]}),
+ ('param="value1 value2 value3"',
+ {'kernelparams': [('param', 'value1 value2 value3')]}),
+ ('param="value1 value2 value3" LANG="pl" ro',
+ {'kernelparams': [('param', 'value1 value2 value3'), ('LANG', 'pl'), ('ro', None)]}),
+ ('ipv6.disable=1',
+ {'kernelparams': [('ipv6.disable', '1')]}),
+ ('param="value1:value2:value3"',
+ {'kernelparams': [('param', 'value1:value2:value3')]}),
+ ('param="value1,value2,value3"',
+ {'kernelparams': [('param', 'value1,value2,value3')]}),
+ ('param="value1" param="value2" param="value3"',
+ {'kernelparams': [('param', 'value1'), ('param', 'value2'), ('param', 'value3')]}),
+ ]
+
+ for cmdline, expectation in expectations:
+ with patch('salt.utils.files.fopen', mock_open(read_data=cmdline)):
+ self.assertEqual(core.kernelparams(), expectation)
+
+ @skipIf(not salt.utils.platform.is_linux(), 'System is not Linux')
+ @patch('os.path.exists')
+ @patch('salt.utils.platform.is_proxy')
+ def test__hw_data_linux_empty(self, is_proxy, exists):
+ is_proxy.return_value = False
+ exists.return_value = True
+ with patch('salt.utils.files.fopen', mock_open(read_data='')):
+ self.assertEqual(core._hw_data({'kernel': 'Linux'}), {
+ 'biosreleasedate': '',
+ 'biosversion': '',
+ 'manufacturer': '',
+ 'productname': '',
+ 'serialnumber': '',
+ 'uuid': ''
+ })
+
+ @skipIf(not salt.utils.platform.is_linux(), 'System is not Linux')
+ @skipIf(six.PY2, 'UnicodeDecodeError is throw in Python 3')
+ @patch('os.path.exists')
+ @patch('salt.utils.platform.is_proxy')
+ def test__hw_data_linux_unicode_error(self, is_proxy, exists):
+ def _fopen(*args):
+ class _File(object):
+ def __enter__(self):
+ return self
+
+ def __exit__(self, *args):
+ pass
+
+ def read(self):
+ raise UnicodeDecodeError('enconding', b'', 1, 2, 'reason')
+
+ return _File()
+
+ is_proxy.return_value = False
+ exists.return_value = True
+ with patch('salt.utils.files.fopen', _fopen):
+ self.assertEqual(core._hw_data({'kernel': 'Linux'}), {})
diff --git a/tests/unit/modules/test_zypperpkg.py b/tests/unit/modules/test_zypperpkg.py
index 6102043384..76937cc358 100644
--- a/tests/unit/modules/test_zypperpkg.py
+++ b/tests/unit/modules/test_zypperpkg.py
@@ -944,6 +944,32 @@ Repository 'DUMMY' not found by its alias, number, or URI.
with self.assertRaisesRegex(CommandExecutionError, '^Advisory id "SUSE-PATCH-XXX" not found$'):
zypper.install(advisory_ids=['SUSE-PATCH-XXX'])
+ @patch('salt.modules.zypperpkg._systemd_scope',
+ MagicMock(return_value=False))
+ @patch('salt.modules.zypperpkg.list_products',
+ MagicMock(return_value={'openSUSE': {'installed': False, 'summary': 'test'}}))
+ @patch('salt.modules.zypperpkg.list_pkgs', MagicMock(side_effect=[{"product:openSUSE": "15.2"},
+ {"product:openSUSE": "15.3"}]))
+ def test_install_product_ok(self):
+ '''
+ Test successful product installation.
+ '''
+ with patch.dict(zypper.__salt__,
+ {
+ 'pkg_resource.parse_targets': MagicMock(
+ return_value=(['product:openSUSE'], None))
+ }):
+ with patch('salt.modules.zypperpkg.__zypper__.noraise.call', MagicMock()) as zypper_mock:
+ ret = zypper.install('product:openSUSE', includes=['product'])
+ zypper_mock.assert_called_once_with(
+ '--no-refresh',
+ 'install',
+ '--auto-agree-with-licenses',
+ '--name',
+ 'product:openSUSE'
+ )
+ self.assertDictEqual(ret, {"product:openSUSE": {"old": "15.2", "new": "15.3"}})
+
def test_remove_purge(self):
'''
Test package removal
--
2.16.4
++++++ accumulated-changes-required-for-yomi-165.patch ++++++
From 9f29577b75cac1e79ec7c30a5dff0dff0ab9da3a Mon Sep 17 00:00:00 2001
From: Alberto Planas <aplanas(a)gmail.com>
Date: Tue, 30 Jul 2019 11:23:12 +0200
Subject: [PATCH] Accumulated changes required for Yomi (#165)
* cmdmod: fix runas and group in run_chroot
The runas and group parameters for cmdmod.run() change the effective
user and group before executing the command. But in a chroot environment
the change is expected to happen inside the chroot, not outside, as the
user and group refer to objects that can only exist inside the
environment.
This patch adds the userspec parameter to the chroot command, to change
the user in the correct place (sketched after this list).
(cherry picked from commit f0434aaeeee3ace4e3fc65c04e69984f08b2541e)
* chroot: add missing sys directory
(cherry picked from commit cdf74426bcad4e8bf329bf604c77ea83bfca8b2c)
* chroot: change variable name to root
(cherry picked from commit 7f68b65b1b0f9eec2a6b07b02714ead0121f0e4b)
* chroot: fix bug in safe_kwargs iteration
(cherry picked from commit 39da1c69ea2781bed6e9d8e6879b70d65fa5a5b0)
* test_cmdmod: fix test_run_cwd_in_combination_with_runas
(cherry picked from commit 42640ecf161caf64c61e9b02927882f92c850092)
* test_cmdmod: add test_run_chroot_runas test
(cherry picked from commit d900035089a22f6741d2095fd1f6694597041a88)
* freezer: do not fail if cache dir is present
(cherry picked from commit 25137c51e6d6e53e3099b6cddbf51d4cb2c53d8d)
* freezer: clean freeze YAML profile on restore
(cherry picked from commit 56b97c997257f12038399549dc987b7723ab225f)
* zypperpkg: fix pkg.list_pkgs cache
The cache for pkg.list_pkgs in the zypper installer is too aggressive.
Some parameters, like root and includes, deliver different package
lists. The current cache does not take those parameters into
consideration, so the next time this function is called, the last
list of packages is returned without checking whether the current
parameters match the old ones.
This patch creates a different cache key for each parameter combination,
so the cached data is separated too (sketched after this list).
(cherry picked from commit 9c54bb3e8c93ba21fc583bdefbcadbe53cbcd7b5)
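Two illustrative sketches of the changes described in this list; the helper names are made up and the real changes are in the diffs below. First, the userspec handling for run_chroot: the user/group switch has to happen inside the chroot, where those accounts actually exist:

    import shlex

    def build_chroot_cmd(root, cmd, runas=None, group=None, sh='/bin/sh'):
        # --userspec makes chroot switch user/group inside the new root,
        # where the accounts referenced by runas/group exist.
        userspec = '--userspec {}:{}'.format(runas, group or '') if runas else ''
        return 'chroot {} {} {} -c {}'.format(userspec, root, sh, shlex.quote(cmd))

Second, the per-parameter cache key for pkg.list_pkgs: each (root, includes) combination gets its own slot, so stale results for one combination are never returned for another:

    _context = {}

    def cached_list_pkgs(compute, root=None, includes=None):
        includes = includes or []
        # Different root/includes combinations yield different package
        # lists, so each combination gets its own cache key.
        key = 'pkg.list_pkgs_{}_{}'.format(root, includes)
        if key not in _context:
            _context[key] = compute()  # expensive rpm query in the real module
        return _context[key]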
---
salt/modules/cmdmod.py | 12 +++++++++---
salt/modules/zypperpkg.py | 13 ++++++++++---
tests/unit/modules/test_cmdmod.py | 16 ++++++++++++++++
tests/unit/modules/test_zypperpkg.py | 21 +++++++++++++++++++++
4 files changed, 56 insertions(+), 6 deletions(-)
diff --git a/salt/modules/cmdmod.py b/salt/modules/cmdmod.py
index eed7656a6d..0d2f720bbb 100644
--- a/salt/modules/cmdmod.py
+++ b/salt/modules/cmdmod.py
@@ -3094,13 +3094,19 @@ def run_chroot(root,
if isinstance(cmd, (list, tuple)):
cmd = ' '.join([six.text_type(i) for i in cmd])
- cmd = 'chroot {0} {1} -c {2}'.format(root, sh_, _cmd_quote(cmd))
+
+ # If runas and group are provided, we expect that the user lives
+ # inside the chroot, not outside.
+ if runas:
+ userspec = '--userspec {}:{}'.format(runas, group if group else '')
+ else:
+ userspec = ''
+
+ cmd = 'chroot {} {} {} -c {}'.format(userspec, root, sh_, _cmd_quote(cmd))
run_func = __context__.pop('cmd.run_chroot.func', run_all)
ret = run_func(cmd,
- runas=runas,
- group=group,
cwd=cwd,
stdin=stdin,
shell=shell,
diff --git a/salt/modules/zypperpkg.py b/salt/modules/zypperpkg.py
index 3760b525e7..8179cd8c1d 100644
--- a/salt/modules/zypperpkg.py
+++ b/salt/modules/zypperpkg.py
@@ -449,8 +449,14 @@ def _clean_cache():
'''
Clean cached results
'''
+ keys = []
for cache_name in ['pkg.list_pkgs', 'pkg.list_provides']:
- __context__.pop(cache_name, None)
+ for contextkey in __context__:
+ if contextkey.startswith(cache_name):
+ keys.append(contextkey)
+
+ for key in keys:
+ __context__.pop(key, None)
def list_upgrades(refresh=True, root=None, **kwargs):
@@ -811,9 +817,10 @@ def list_pkgs(versions_as_list=False, root=None, includes=None, **kwargs):
includes = includes if includes else []
- contextkey = 'pkg.list_pkgs'
+ # Results can be different if a different root or a different
+ # inclusion types are passed
+ contextkey = 'pkg.list_pkgs_{}_{}'.format(root, includes)
- # TODO(aplanas): this cached value depends on the parameters
if contextkey not in __context__:
ret = {}
cmd = ['rpm']
diff --git a/tests/unit/modules/test_cmdmod.py b/tests/unit/modules/test_cmdmod.py
index f8fba59294..8d763435f8 100644
--- a/tests/unit/modules/test_cmdmod.py
+++ b/tests/unit/modules/test_cmdmod.py
@@ -371,6 +371,22 @@ class CMDMODTestCase(TestCase, LoaderModuleMockMixin):
else:
raise RuntimeError
+ @skipIf(salt.utils.platform.is_windows(), 'Do not run on Windows')
+ @skipIf(salt.utils.platform.is_darwin(), 'Do not run on MacOS')
+ def test_run_cwd_in_combination_with_runas(self):
+ '''
+ cmd.run executes command in the cwd directory
+ when the runas parameter is specified
+ '''
+ cmd = 'pwd'
+ cwd = '/tmp'
+ runas = os.getlogin()
+
+ with patch.dict(cmdmod.__grains__, {'os': 'Darwin',
+ 'os_family': 'Solaris'}):
+ stdout = cmdmod._run(cmd, cwd=cwd, runas=runas).get('stdout')
+ self.assertEqual(stdout, cwd)
+
def test_run_all_binary_replace(self):
'''
Test for failed decoding of binary data, for instance when doing
diff --git a/tests/unit/modules/test_zypperpkg.py b/tests/unit/modules/test_zypperpkg.py
index 12c22bfcb2..6102043384 100644
--- a/tests/unit/modules/test_zypperpkg.py
+++ b/tests/unit/modules/test_zypperpkg.py
@@ -571,6 +571,7 @@ Repository 'DUMMY' not found by its alias, number, or URI.
patch.dict(zypper.__salt__, {'pkg_resource.stringify': MagicMock()}):
pkgs = zypper.list_pkgs(versions_as_list=True)
self.assertFalse(pkgs.get('gpg-pubkey', False))
+ self.assertTrue('pkg.list_pkgs_None_[]' in zypper.__context__)
for pkg_name, pkg_version in {
'jakarta-commons-discovery': ['0.4-129.686'],
'yast2-ftp-server': ['3.1.8-8.1'],
@@ -613,6 +614,7 @@ Repository 'DUMMY' not found by its alias, number, or URI.
patch.dict(pkg_resource.__salt__, {'pkg.parse_arch_from_name': zypper.parse_arch_from_name}):
pkgs = zypper.list_pkgs(attr=['epoch', 'release', 'arch', 'install_date_time_t'])
self.assertFalse(pkgs.get('gpg-pubkey', False))
+ self.assertTrue('pkg.list_pkgs_None_[]' in zypper.__context__)
for pkg_name, pkg_attr in {
'jakarta-commons-discovery': [{
'version': '0.4',
@@ -1456,3 +1458,22 @@ pattern() = package-c'''),
'summary': 'description b',
},
}
+
+ def test__clean_cache_empty(self):
+ '''Test that an empty cached can be cleaned'''
+ context = {}
+ with patch.dict(zypper.__context__, context):
+ zypper._clean_cache()
+ assert context == {}
+
+ def test__clean_cache_filled(self):
+ '''Test that a filled cached can be cleaned'''
+ context = {
+ 'pkg.list_pkgs_/mnt_[]': None,
+ 'pkg.list_pkgs_/mnt_[patterns]': None,
+ 'pkg.list_provides': None,
+ 'pkg.other_data': None,
+ }
+ with patch.dict(zypper.__context__, context):
+ zypper._clean_cache()
+ self.assertEqual(zypper.__context__, {'pkg.other_data': None})
--
2.16.4
++++++ activate-all-beacons-sources-config-pillar-grains.patch ++++++
From 6df4cef549665aad5b9e2af50eb06124a2bb0997 Mon Sep 17 00:00:00 2001
From: Bo Maryniuk <bo(a)suse.de>
Date: Tue, 17 Oct 2017 16:52:33 +0200
Subject: [PATCH] Activate all beacons sources: config/pillar/grains
---
salt/minion.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/salt/minion.py b/salt/minion.py
index 6a77d90185..457f485b0a 100644
--- a/salt/minion.py
+++ b/salt/minion.py
@@ -483,7 +483,7 @@ class MinionBase(object):
the pillar or grains changed
'''
if 'config.merge' in functions:
- b_conf = functions['config.merge']('beacons', self.opts['beacons'], omit_opts=True)
+ b_conf = functions['config.merge']('beacons', self.opts['beacons'])
if b_conf:
return self.beacons.process(b_conf, self.opts['grains']) # pylint: disable=no-member
return []
--
2.16.4
++++++ add-all_versions-parameter-to-include-all-installed-.patch ++++++
From cd66b1e6636013440577a38a5a68729fec2f3f99 Mon Sep 17 00:00:00 2001
From: Pablo Suárez Hernández <psuarezhernandez(a)suse.com>
Date: Mon, 14 May 2018 11:33:13 +0100
Subject: [PATCH] Add "all_versions" parameter to include all installed
version on rpm.info
Enable "all_versions" parameter for zypper.info_installed
Enable "all_versions" parameter for yumpkg.info_installed
Prevent adding failed packages when pkg name contains the arch (on SUSE)
Add 'all_versions' documentation for info_installed on yum/zypper modules
Add unit tests for info_installed with all_versions
Refactor: use dict.setdefault instead of if-else statement
Allow removing only specific package versions with zypper and yum
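The arch handling mentioned above ("pkg name contains the arch") works roughly as sketched here, mirroring the diff below; the ARCHES tuple stands in for salt.utils.pkg.rpm.ARCHES and the helper name is made up:

    ARCHES = ('x86_64', 'i586', 'i686', 'aarch64', 'ppc64le', 's390x')

    def current_versions(pkgname, cur_pkgs):
        # On SUSE, zypper reports installed packages without the arch in
        # the name, so "<name>.<arch>" targets are split before lookup.
        try:
            namepart, archpart = pkgname.rsplit('.', 1)
        except ValueError:
            return cur_pkgs.get(pkgname, [])
        if archpart in ARCHES + ('noarch',):
            pkgname = namepart
        return cur_pkgs.get(pkgname, [])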
---
salt/states/pkg.py | 21 +++++++++++++++++++++
1 file changed, 21 insertions(+)
diff --git a/salt/states/pkg.py b/salt/states/pkg.py
index a13d418400..c0fa2f6b69 100644
--- a/salt/states/pkg.py
+++ b/salt/states/pkg.py
@@ -450,6 +450,16 @@ def _find_remove_targets(name=None,
if __grains__['os'] == 'FreeBSD' and origin:
cver = [k for k, v in six.iteritems(cur_pkgs) if v['origin'] == pkgname]
+ elif __grains__['os_family'] == 'Suse':
+ # On SUSE systems. Zypper returns packages without "arch" in name
+ try:
+ namepart, archpart = pkgname.rsplit('.', 1)
+ except ValueError:
+ cver = cur_pkgs.get(pkgname, [])
+ else:
+ if archpart in salt.utils.pkg.rpm.ARCHES + ("noarch",):
+ pkgname = namepart
+ cver = cur_pkgs.get(pkgname, [])
else:
cver = cur_pkgs.get(pkgname, [])
@@ -856,6 +866,17 @@ def _verify_install(desired, new_pkgs, ignore_epoch=False, new_caps=None):
cver = new_pkgs.get(pkgname.split('%')[0])
elif __grains__['os_family'] == 'Debian':
cver = new_pkgs.get(pkgname.split('=')[0])
+ elif __grains__['os_family'] == 'Suse':
+ # On SUSE systems. Zypper returns packages without "arch" in name
+ try:
+ namepart, archpart = pkgname.rsplit('.', 1)
+ except ValueError:
+ cver = new_pkgs.get(pkgname)
+ else:
+ if archpart in salt.utils.pkg.rpm.ARCHES + ("noarch",):
+ cver = new_pkgs.get(namepart)
+ else:
+ cver = new_pkgs.get(pkgname)
else:
cver = new_pkgs.get(pkgname)
if not cver and pkgname in new_caps:
--
2.16.4
++++++ add-astra-linux-common-edition-to-the-os-family-list.patch ++++++
From acf0b24353d831dcc2c5b292f99480938f5ecd93 Mon Sep 17 00:00:00 2001
From: Julio González Gil <juliogonzalez(a)users.noreply.github.com>
Date: Wed, 12 Feb 2020 10:05:45 +0100
Subject: [PATCH] Add Astra Linux Common Edition to the OS Family list
(#209)
---
salt/grains/core.py | 1 +
tests/unit/grains/test_core.py | 20 ++++++++++++++++++++
2 files changed, 21 insertions(+)
diff --git a/salt/grains/core.py b/salt/grains/core.py
index 20950988d9..f410985198 100644
--- a/salt/grains/core.py
+++ b/salt/grains/core.py
@@ -1523,6 +1523,7 @@ _OS_FAMILY_MAP = {
'Funtoo': 'Gentoo',
'AIX': 'AIX',
'TurnKey': 'Debian',
+ 'AstraLinuxCE': 'Debian',
}
# Matches any possible format:
diff --git a/tests/unit/grains/test_core.py b/tests/unit/grains/test_core.py
index b4ed9379e5..c276dee9f3 100644
--- a/tests/unit/grains/test_core.py
+++ b/tests/unit/grains/test_core.py
@@ -605,6 +605,26 @@ class CoreGrainsTestCase(TestCase, LoaderModuleMockMixin):
}
self._run_os_grains_tests("ubuntu-17.10", _os_release_map, expectation)
+ @skipIf(not salt.utils.platform.is_linux(), 'System is not Linux')
+ def test_astralinuxce_2_os_grains(self):
+ '''
+ Test if OS grains are parsed correctly in Astra Linux CE 2.12.22 "orel"
+ '''
+ _os_release_map = {
+ 'linux_distribution': ('AstraLinuxCE', '2.12.22', 'orel'),
+ }
+ expectation = {
+ 'os': 'AstraLinuxCE',
+ 'os_family': 'Debian',
+ 'oscodename': 'orel',
+ 'osfullname': 'AstraLinuxCE',
+ 'osrelease': '2.12.22',
+ 'osrelease_info': (2, 12, 22),
+ 'osmajorrelease': 2,
+ 'osfinger': 'AstraLinuxCE-2',
+ }
+ self._run_os_grains_tests("astralinuxce-2.12.22", _os_release_map, expectation)
+
@skipIf(not salt.utils.platform.is_windows(), 'System is not Windows')
def test_windows_platform_data(self):
'''
--
2.16.4
++++++ add-batch_presence_ping_timeout-and-batch_presence_p.patch ++++++
From 376a7d2eeb6b3b215fac9322f1baee4497bdb339 Mon Sep 17 00:00:00 2001
From: Marcelo Chiaradia <mchiaradia(a)suse.com>
Date: Thu, 4 Apr 2019 13:57:38 +0200
Subject: [PATCH] Add 'batch_presence_ping_timeout' and
'batch_presence_ping_gather_job_timeout' parameters for synchronous batching
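A minimal sketch of the fallback behavior the patch introduces (mirroring the diff below): both new options default to the existing timeout settings when not passed explicitly:

    def apply_batch_presence_opts(opts, **kwargs):
        # Fall back to the general timeouts when the presence-ping
        # variants are not supplied by the caller.
        opts['batch_presence_ping_timeout'] = kwargs.get(
            'batch_presence_ping_timeout', opts['timeout'])
        opts['batch_presence_ping_gather_job_timeout'] = kwargs.get(
            'batch_presence_ping_gather_job_timeout',
            opts['gather_job_timeout'])
        return opts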
---
salt/cli/batch.py | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/salt/cli/batch.py b/salt/cli/batch.py
index 36e66da1af..67f03c8a45 100644
--- a/salt/cli/batch.py
+++ b/salt/cli/batch.py
@@ -83,6 +83,9 @@ def batch_get_opts(
if key not in opts:
opts[key] = val
+ opts['batch_presence_ping_timeout'] = kwargs.get('batch_presence_ping_timeout', opts['timeout'])
+ opts['batch_presence_ping_gather_job_timeout'] = kwargs.get('batch_presence_ping_gather_job_timeout', opts['gather_job_timeout'])
+
return opts
@@ -119,7 +122,7 @@ class Batch(object):
args = [self.opts['tgt'],
'test.ping',
[],
- self.opts['timeout'],
+ self.opts.get('batch_presence_ping_timeout', self.opts['timeout']),
]
selected_target_option = self.opts.get('selected_target_option', None)
@@ -130,7 +133,7 @@ class Batch(object):
self.pub_kwargs['yield_pub_data'] = True
ping_gen = self.local.cmd_iter(*args,
- gather_job_timeout=self.opts['gather_job_timeout'],
+ gather_job_timeout=self.opts.get('batch_presence_ping_gather_job_timeout', self.opts['gather_job_timeout']),
**self.pub_kwargs)
# Broadcast to targets
--
2.16.4
++++++ add-cpe_name-for-osversion-grain-parsing-u-49946.patch ++++++
From a90f35bc03b477a63aae20c58f8957c075569465 Mon Sep 17 00:00:00 2001
From: Bo Maryniuk <bo(a)suse.de>
Date: Tue, 9 Oct 2018 14:08:50 +0200
Subject: [PATCH] Add CPE_NAME for osversion* grain parsing (U#49946)
Remove unnecessary linebreak
Override VERSION_ID from os-release, if CPE_NAME is given
Add unit test for WFN format of CPE_NAME
Add unit test for v2.3 of CPE format
Add unit test for broken CPE_NAME
Prevent possible crash if CPE_NAME is wrongly written in the distro
Add part parsing
Keep CPE_NAME only for opensuse series
Remove linebreak
Expand unit test to verify part name
Fix proper part name in the string-bound CPE
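For reference, the two CPE_NAME shapes the parser in the diff below accepts, with the approximate result for an openSUSE-style value; the concrete strings here are illustrative:

    # URI-bound WFN form and CPE 2.3 string form, respectively.
    uri_style = 'cpe:/o:opensuse:leap:15.2'
    cpe23_style = 'cpe:2.3:o:opensuse:leap:15.2:*:*:*:*:*:*:*'
    # _parse_cpe_name() yields roughly the same dict for either form:
    #   {'vendor': 'opensuse', 'product': 'leap', 'version': '15.2',
    #    'phase': None, 'part': 'operating system'}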
---
salt/grains/core.py | 28 ++++++++++++++++++++++++++++
1 file changed, 28 insertions(+)
diff --git a/salt/grains/core.py b/salt/grains/core.py
index 9c1b5d930e..7b7e328520 100644
--- a/salt/grains/core.py
+++ b/salt/grains/core.py
@@ -1642,6 +1642,34 @@ def _parse_cpe_name(cpe):
return ret
+def _parse_cpe_name(cpe):
+ '''
+ Parse CPE_NAME data from the os-release
+
+ Info: https://csrc.nist.gov/projects/security-content-automation-protocol/scap-sp…
+
+ :param cpe:
+ :return:
+ '''
+ part = {
+ 'o': 'operating system',
+ 'h': 'hardware',
+ 'a': 'application',
+ }
+ ret = {}
+ cpe = (cpe or '').split(':')
+ if len(cpe) > 4 and cpe[0] == 'cpe':
+ if cpe[1].startswith('/'): # WFN to URI
+ ret['vendor'], ret['product'], ret['version'] = cpe[2:5]
+ ret['phase'] = cpe[5] if len(cpe) > 5 else None
+ ret['part'] = part.get(cpe[1][1:])
+ elif len(cpe) == 13 and cpe[1] == '2.3': # WFN to a string
+ ret['vendor'], ret['product'], ret['version'], ret['phase'] = [x if x != '*' else None for x in cpe[3:7]]
+ ret['part'] = part.get(cpe[2])
+
+ return ret
+
+
def os_data():
'''
Return grains pertaining to the operating system
--
2.16.4
++++++ add-custom-suse-capabilities-as-grains.patch ++++++
From e57dd3c2ae655422f0f6939825154ce5827d43c4 Mon Sep 17 00:00:00 2001
From: Pablo Suárez Hernández <psuarezhernandez(a)suse.com>
Date: Thu, 21 Jun 2018 11:57:57 +0100
Subject: [PATCH] Add custom SUSE capabilities as Grains
---
salt/grains/extra.py | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/salt/grains/extra.py b/salt/grains/extra.py
index 9ce644b766..1082b05dba 100644
--- a/salt/grains/extra.py
+++ b/salt/grains/extra.py
@@ -75,3 +75,10 @@ def config():
log.warning("Bad syntax in grains file! Skipping.")
return {}
return {}
+
+
+def suse_backported_capabilities():
+ return {
+ '__suse_reserved_pkg_all_versions_support': True,
+ '__suse_reserved_pkg_patches_support': True
+ }
--
2.16.4
++++++ add-docker-logout-237.patch ++++++
From 0b0a0282519869364d753163e1c5267a1087e912 Mon Sep 17 00:00:00 2001
From: Alexander Graul <agraul(a)suse.com>
Date: Mon, 18 May 2020 16:39:27 +0200
Subject: [PATCH] Add docker logout (#237)
Docker logout works analogously to login. It takes zero, one, or more
registries as arguments. If there are no arguments, all known docker
registries (those specified in pillar) are logged out of. If arguments
are present, they are interpreted as a list of docker registries to log
out of.
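A minimal sketch of that flow, under stated assumptions: the real module shells out through __salt__['cmd.run_all'] rather than subprocess, and the names here are illustrative:

    import subprocess

    def logout_registries(registry_auth, registries=None):
        # With no explicit registries, log out of every configured one.
        registries = registries or list(registry_auth)
        results = {}
        for registry in registries:
            cmd = ['docker', 'logout']
            if registry.lower() != 'hub':  # bare "docker logout" targets Docker Hub
                cmd.append(registry)
            results[registry] = subprocess.call(cmd) == 0
        return results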
---
salt/modules/dockermod.py | 80 ++++++++++++++++++++++++++++
tests/unit/modules/test_dockermod.py | 59 ++++++++++++++++++++
2 files changed, 139 insertions(+)
diff --git a/salt/modules/dockermod.py b/salt/modules/dockermod.py
index 28a2107cec..119e9eb170 100644
--- a/salt/modules/dockermod.py
+++ b/salt/modules/dockermod.py
@@ -1481,6 +1481,86 @@ def login(*registries):
return ret
+def logout(*registries):
+ """
+ .. versionadded:: 3001
+
+ Performs a ``docker logout`` to remove the saved authentication details for
+ one or more configured repositories.
+
+ Multiple registry URLs (matching those configured in Pillar) can be passed,
+ and Salt will attempt to logout of *just* those registries. If no registry
+ URLs are provided, Salt will attempt to logout of *all* configured
+ registries.
+
+ **RETURN DATA**
+
+ A dictionary containing the following keys:
+
+ - ``Results`` - A dictionary mapping registry URLs to the authentication
+ result. ``True`` means a successful logout, ``False`` means a failed
+ logout.
+ - ``Errors`` - A list of errors encountered during the course of this
+ function.
+
+ CLI Example:
+
+ .. code-block:: bash
+
+ salt myminion docker.logout
+ salt myminion docker.logout hub
+ salt myminion docker.logout hub https://mydomain.tld/registry/
+ """
+ # NOTE: This function uses the "docker logout" CLI command to remove
+ # authentication information from config.json. docker-py does not support
+ # this usecase (see https://github.com/docker/docker-py/issues/1091)
+
+ # To logout of all known (to Salt) docker registries, they have to be collected first
+ registry_auth = __salt__["config.get"]("docker-registries", {})
+ ret = {"retcode": 0}
+ errors = ret.setdefault("Errors", [])
+ if not isinstance(registry_auth, dict):
+ errors.append("'docker-registries' Pillar value must be a dictionary")
+ registry_auth = {}
+ for reg_name, reg_conf in six.iteritems(
+ __salt__["config.option"]("*-docker-registries", wildcard=True)
+ ):
+ try:
+ registry_auth.update(reg_conf)
+ except TypeError:
+ errors.append(
+ "Docker registry '{0}' was not specified as a "
+ "dictionary".format(reg_name)
+ )
+
+ # If no registries passed, we will logout of all known registries
+ if not registries:
+ registries = list(registry_auth)
+
+ results = ret.setdefault("Results", {})
+ for registry in registries:
+ if registry not in registry_auth:
+ errors.append("No match found for registry '{0}'".format(registry))
+ continue
+ else:
+ cmd = ["docker", "logout"]
+ if registry.lower() != "hub":
+ cmd.append(registry)
+ log.debug("Attempting to logout of docker registry '%s'", registry)
+ logout_cmd = __salt__["cmd.run_all"](
+ cmd, python_shell=False, output_loglevel="quiet",
+ )
+ results[registry] = logout_cmd["retcode"] == 0
+ if not results[registry]:
+ if logout_cmd["stderr"]:
+ errors.append(logout_cmd["stderr"])
+ elif logout_cmd["stdout"]:
+ errors.append(logout_cmd["stdout"])
+ if errors:
+ ret["retcode"] = 1
+ return ret
+
+
# Functions for information gathering
def depends(name):
'''
diff --git a/tests/unit/modules/test_dockermod.py b/tests/unit/modules/test_dockermod.py
index 191bfc123f..8f4ead2867 100644
--- a/tests/unit/modules/test_dockermod.py
+++ b/tests/unit/modules/test_dockermod.py
@@ -164,6 +164,65 @@ class DockerTestCase(TestCase, LoaderModuleMockMixin):
self.assertIn('retcode', ret)
self.assertNotEqual(ret['retcode'], 0)
+ def test_logout_calls_docker_cli_logout_single(self):
+ client = Mock()
+ get_client_mock = MagicMock(return_value=client)
+ ref_out = {"stdout": "", "stderr": "", "retcode": 0}
+ registry_auth_data = {
+ "portus.example.com:5000": {
+ "username": "admin",
+ "password": "linux12345",
+ "email": "tux(a)example.com",
+ }
+ }
+ docker_mock = MagicMock(return_value=ref_out)
+ with patch.object(docker_mod, "_get_client", get_client_mock):
+ dunder_salt = {
+ "config.get": MagicMock(return_value=registry_auth_data),
+ "cmd.run_all": docker_mock,
+ "config.option": MagicMock(return_value={}),
+ }
+ with patch.dict(docker_mod.__salt__, dunder_salt):
+ ret = docker_mod.logout("portus.example.com:5000")
+ assert "retcode" in ret
+ assert ret["retcode"] == 0
+ docker_mock.assert_called_with(
+ ["docker", "logout", "portus.example.com:5000"],
+ python_shell=False,
+ output_loglevel="quiet",
+ )
+
+
+ def test_logout_calls_docker_cli_logout_all(self):
+ client = Mock()
+ get_client_mock = MagicMock(return_value=client)
+ ref_out = {"stdout": "", "stderr": "", "retcode": 0}
+ registry_auth_data = {
+ "portus.example.com:5000": {
+ "username": "admin",
+ "password": "linux12345",
+ "email": "tux(a)example.com",
+ },
+ "portus2.example.com:5000": {
+ "username": "admin",
+ "password": "linux12345",
+ "email": "tux(a)example.com",
+ },
+ }
+
+ docker_mock = MagicMock(return_value=ref_out)
+ with patch.object(docker_mod, "_get_client", get_client_mock):
+ dunder_salt = {
+ "config.get": MagicMock(return_value=registry_auth_data),
+ "cmd.run_all": docker_mock,
+ "config.option": MagicMock(return_value={}),
+ }
+ with patch.dict(docker_mod.__salt__, dunder_salt):
+ ret = docker_mod.logout()
+ assert "retcode" in ret
+ assert ret["retcode"] == 0
+ assert docker_mock.call_count == 2
+
def test_ps_with_host_true(self):
'''
Check that docker.ps called with host is ``True``,
--
2.26.2
++++++ add-environment-variable-to-know-if-yum-is-invoked-f.patch ++++++
>From 874b1229babf5244debac141cd260f695ccc1e9d Mon Sep 17 00:00:00 2001
From: Marcelo Chiaradia <mchiaradia(a)suse.com>
Date: Thu, 7 Jun 2018 10:29:41 +0200
Subject: [PATCH] Add environment variable to know if yum is invoked from
 Salt (bsc#1057635)
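
For illustration only: a yum plugin can read this variable to detect a
Salt-driven run. The plugin below is a minimal sketch under that assumption
(the plugin itself and its message are hypothetical, not part of this patch):

.. code-block:: python

    # Hypothetical yum plugin: detect invocation from Salt via SALT_RUNNING.
    import os

    from yum.plugins import TYPE_CORE

    requires_api_version = '2.3'
    plugin_type = (TYPE_CORE,)

    def init_hook(conduit):
        # The patched yumpkg calls export SALT_RUNNING=1 in the environment.
        if os.environ.get('SALT_RUNNING') == '1':
            conduit.info(2, 'yum was invoked from Salt')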
---
salt/modules/yumpkg.py | 18 ++++++++++++------
1 file changed, 12 insertions(+), 6 deletions(-)
diff --git a/salt/modules/yumpkg.py b/salt/modules/yumpkg.py
index f7e4ac9753..c89d321a1b 100644
--- a/salt/modules/yumpkg.py
+++ b/salt/modules/yumpkg.py
@@ -913,7 +913,8 @@ def list_repo_pkgs(*args, **kwargs):
yum_version = None if _yum() != 'yum' else _LooseVersion(
__salt__['cmd.run'](
['yum', '--version'],
- python_shell=False
+ python_shell=False,
+ env={"SALT_RUNNING": '1'}
).splitlines()[0].strip()
)
# Really old version of yum; does not even have --showduplicates option
@@ -2324,7 +2325,8 @@ def list_holds(pattern=__HOLD_PATTERN, full=True):
_check_versionlock()
out = __salt__['cmd.run']([_yum(), 'versionlock', 'list'],
- python_shell=False)
+ python_shell=False,
+ env={"SALT_RUNNING": '1'})
ret = []
for line in salt.utils.itertools.split(out, '\n'):
match = _get_hold(line, pattern=pattern, full=full)
@@ -2390,7 +2392,8 @@ def group_list():
out = __salt__['cmd.run_stdout'](
[_yum(), 'grouplist', 'hidden'],
output_loglevel='trace',
- python_shell=False
+ python_shell=False,
+ env={"SALT_RUNNING": '1'}
)
key = None
for line in salt.utils.itertools.split(out, '\n'):
@@ -2457,7 +2460,8 @@ def group_info(name, expand=False):
out = __salt__['cmd.run_stdout'](
cmd,
output_loglevel='trace',
- python_shell=False
+ python_shell=False,
+ env={"SALT_RUNNING": '1'}
)
g_info = {}
@@ -3134,7 +3138,8 @@ def download(*packages):
__salt__['cmd.run'](
cmd,
output_loglevel='trace',
- python_shell=False
+ python_shell=False,
+ env={"SALT_RUNNING": '1'}
)
ret = {}
for dld_result in os.listdir(CACHE_DIR):
@@ -3209,7 +3214,8 @@ def _get_patches(installed_only=False):
cmd = [_yum(), '--quiet', 'updateinfo', 'list', 'all']
ret = __salt__['cmd.run_stdout'](
cmd,
- python_shell=False
+ python_shell=False,
+ env={"SALT_RUNNING": '1'}
)
for line in salt.utils.itertools.split(ret, os.linesep):
inst, advisory_id, sev, pkg = re.match(r'([i|\s]) ([^\s]+) +([^\s]+) +([^\s]+)',
--
2.16.4
++++++ add-hold-unhold-functions.patch ++++++
>From 666f62917bbc48cbee2ed0aa319a61afd1b1fcb2 Mon Sep 17 00:00:00 2001
From: Bo Maryniuk <bo(a)suse.de>
Date: Thu, 6 Dec 2018 16:26:23 +0100
Subject: [PATCH] Add hold/unhold functions
Add unhold function
Add warnings
---
salt/modules/zypperpkg.py | 88 ++++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 87 insertions(+), 1 deletion(-)
diff --git a/salt/modules/zypperpkg.py b/salt/modules/zypperpkg.py
index 50279ccbd1..08a9c2ed4d 100644
--- a/salt/modules/zypperpkg.py
+++ b/salt/modules/zypperpkg.py
@@ -41,6 +41,7 @@ import salt.utils.pkg
import salt.utils.pkg.rpm
import salt.utils.stringutils
import salt.utils.systemd
+import salt.utils.versions
from salt.utils.versions import LooseVersion
import salt.utils.environment
from salt.exceptions import CommandExecutionError, MinionError, SaltInvocationError
@@ -1771,7 +1772,7 @@ def clean_locks():
return out
-def remove_lock(packages, **kwargs): # pylint: disable=unused-argument
+def unhold(name=None, pkgs=None, **kwargs):
'''
Remove specified package lock.
@@ -1783,7 +1784,48 @@ def remove_lock(packages, **kwargs): # pylint: disable=unused-argument
salt '*' pkg.remove_lock <package1>,<package2>,<package3>
salt '*' pkg.remove_lock pkgs='["foo", "bar"]'
'''
+ ret = {}
+ if (not name and not pkgs) or (name and pkgs):
+ raise CommandExecutionError('Name or packages must be specified.')
+ elif name:
+ pkgs = [name]
+
+ locks = list_locks()
+ try:
+ pkgs = list(__salt__['pkg_resource.parse_targets'](pkgs)[0].keys())
+ except MinionError as exc:
+ raise CommandExecutionError(exc)
+
+ removed = []
+ missing = []
+    for pkg in pkgs:
+        ret[pkg] = {'name': pkg, 'changes': {}, 'result': False, 'comment': ''}
+        if locks.get(pkg):
+            removed.append(pkg)
+            ret[pkg]['comment'] = 'Package {0} is no longer held.'.format(pkg)
+        else:
+            missing.append(pkg)
+            ret[pkg]['comment'] = 'Package {0} unable to be unheld.'.format(pkg)
+
+ if removed:
+ __zypper__.call('rl', *removed)
+
+ return ret
+
+
+def remove_lock(packages, **kwargs): # pylint: disable=unused-argument
+ '''
+ Remove specified package lock.
+
+ CLI Example:
+
+ .. code-block:: bash
+ salt '*' pkg.remove_lock <package name>
+ salt '*' pkg.remove_lock <package1>,<package2>,<package3>
+ salt '*' pkg.remove_lock pkgs='["foo", "bar"]'
+ '''
+ salt.utils.versions.warn_until('Sodium', 'This function is deprecated. Please use unhold() instead.')
locks = list_locks()
try:
packages = list(__salt__['pkg_resource.parse_targets'](packages)[0].keys())
@@ -1804,6 +1846,50 @@ def remove_lock(packages, **kwargs): # pylint: disable=unused-argument
return {'removed': len(removed), 'not_found': missing}
+def hold(name=None, pkgs=None, **kwargs):
+ '''
+ Add a package lock. Specify packages to lock by exact name.
+
+ CLI Example:
+
+ .. code-block:: bash
+
+        salt '*' pkg.hold <package name>
+        salt '*' pkg.hold <package1>,<package2>,<package3>
+        salt '*' pkg.hold pkgs='["foo", "bar"]'
+
+    :param name: The name of a single package to hold.
+    :param pkgs: A list of packages to hold.
+    :param kwargs: Additional keyword arguments (currently unused).
+    :return: A dictionary of per-package results.
+ '''
+ ret = {}
+ if (not name and not pkgs) or (name and pkgs):
+ raise CommandExecutionError('Name or packages must be specified.')
+ elif name:
+ pkgs = [name]
+
+ locks = list_locks()
+ added = []
+ try:
+ pkgs = list(__salt__['pkg_resource.parse_targets'](pkgs)[0].keys())
+ except MinionError as exc:
+ raise CommandExecutionError(exc)
+
+ for pkg in pkgs:
+ ret[pkg] = {'name': pkg, 'changes': {}, 'result': False, 'comment': ''}
+ if not locks.get(pkg):
+ added.append(pkg)
+ ret[pkg]['comment'] = 'Package {0} is now being held.'.format(pkg)
+ else:
+ ret[pkg]['comment'] = 'Package {0} is already set to be held.'.format(pkg)
+
+ if added:
+ __zypper__.call('al', *added)
+
+ return ret
+
+
def add_lock(packages, **kwargs): # pylint: disable=unused-argument
'''
Add a package lock. Specify packages to lock by exact name.
@@ -1816,6 +1901,7 @@ def add_lock(packages, **kwargs): # pylint: disable=unused-argument
salt '*' pkg.add_lock <package1>,<package2>,<package3>
salt '*' pkg.add_lock pkgs='["foo", "bar"]'
'''
+ salt.utils.versions.warn_until('Sodium', 'This function is deprecated. Please use hold() instead.')
locks = list_locks()
added = []
try:
--
2.16.4
++++++ add-ip-filtering-by-network.patch ++++++
>From a8615ab8f3debdc5962ecda5c52a432987bde02a Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Marcus=20R=C3=BCckert?= <darix(a)nordisch.org>
Date: Thu, 20 Feb 2020 17:13:31 +0100
Subject: [PATCH] Add IP filtering by network
IPs are filtered out if they don't belong to any of the given networks.
If `None` is passed as the network, all IPs are returned. An empty list
rejects all IPs.
Example:
{% set networks = ['192.168.0.0/24', 'fe80::/64'] %}
{{ grains['ip_interfaces'] | filter_by_networks(networks) }}
{{ grains['ipv6'] | filter_by_networks(networks) }}
{{ grains['ipv4'] | filter_by_networks(networks) }}
Fixes #212
Co-authored-by: Alexander Graul <agraul(a)suse.com>
Add unit tests for filter_by_networks
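
The membership test behind the filter boils down to stdlib ``ipaddress``
checks; a minimal standalone sketch of the semantics (independent of Salt):

.. code-block:: python

    # Standalone sketch of the filtering semantics using only the stdlib.
    import ipaddress

    def filter_ips(ips, networks):
        if networks is None:  # None disables filtering entirely
            return list(ips)
        nets = [ipaddress.ip_network(net) for net in networks]
        # An empty network list therefore rejects every IP.
        return [ip for ip in ips
                for net in nets if ipaddress.ip_address(ip) in net]

    assert filter_ips(['10.0.0.5', '8.8.8.8'], ['10.0.0.0/8']) == ['10.0.0.5']

Mixed address families are safe here: an IPv4 address is simply not contained
in an IPv6 network, so the check returns False rather than raising.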
---
salt/utils/network.py | 35 +++++++++++++++++++++-
tests/unit/utils/test_network.py | 50 ++++++++++++++++++++++++++++++++
2 files changed, 84 insertions(+), 1 deletion(-)
diff --git a/salt/utils/network.py b/salt/utils/network.py
index def997f3dc36654efacff00bf14d00c43c7ff7d9..09fb0ac2346a27e5383fc2904b85011e467e5fb8 100644
--- a/salt/utils/network.py
+++ b/salt/utils/network.py
@@ -5,7 +5,9 @@ Define some generic socket functions for network modules
'''
# Import python libs
-from __future__ import absolute_import, unicode_literals, print_function
+from __future__ import absolute_import, print_function, unicode_literals
+
+import collections
import itertools
import os
import re
@@ -1987,3 +1989,34 @@ def is_fqdn(hostname):
compliant = re.compile(r"(?!-)[A-Z\d\-\_]{1,63}(?<!-)$", re.IGNORECASE)
return "." in hostname and len(hostname) < 0xff and all(compliant.match(x) for x in hostname.rstrip(".").split("."))
+
+
+@jinja_filter("filter_by_networks")
+def filter_by_networks(values, networks):
+ """
+ Returns the list of IPs filtered by the network list.
+ If the network list is an empty sequence, no IPs are returned.
+ If the network list is None, all IPs are returned.
+
+ {% set networks = ['192.168.0.0/24', 'fe80::/64'] %}
+ {{ grains['ip_interfaces'] | filter_by_networks(networks) }}
+ {{ grains['ipv6'] | filter_by_networks(networks) }}
+ {{ grains['ipv4'] | filter_by_networks(networks) }}
+ """
+
+ _filter = lambda ips, networks: [
+ ip for ip in ips for net in networks if ipaddress.ip_address(ip) in net
+ ]
+
+ if networks is not None:
+ networks = [ipaddress.ip_network(network) for network in networks]
+ if isinstance(values, collections.Mapping):
+ return {
+ interface: _filter(values[interface], networks) for interface in values
+ }
+ elif isinstance(values, collections.Sequence):
+ return _filter(values, networks)
+ else:
+ raise ValueError("Do not know how to filter a {}".format(type(values)))
+ else:
+ return values
diff --git a/tests/unit/utils/test_network.py b/tests/unit/utils/test_network.py
index 74479b0cae87a33b0ecc9648f568c4d87bb6e445..b734f862f68edbd6cf25658ff1a2d25a0c2f4383 100644
--- a/tests/unit/utils/test_network.py
+++ b/tests/unit/utils/test_network.py
@@ -13,6 +13,8 @@ from tests.support.mock import (
create_autospec,
patch,
)
+import pytest
+import salt.exceptions
# Import salt libs
import salt.utils.network as network
@@ -725,3 +727,51 @@ class NetworkTestCase(TestCase):
"""
for fqdn in ["hostname", "/some/path", "$variable.here", "verylonghostname.{}".format("domain" * 45)]:
assert not network.is_fqdn(fqdn)
+
+ def test_filter_by_networks_with_no_filter(self):
+ ips = ["10.0.123.200", "10.10.10.10"]
+ with pytest.raises(TypeError):
+ network.filter_by_networks(ips) # pylint: disable=no-value-for-parameter
+
+ def test_filter_by_networks_empty_filter(self):
+ ips = ["10.0.123.200", "10.10.10.10"]
+ assert network.filter_by_networks(ips, []) == []
+
+ def test_filter_by_networks_ips_list(self):
+ ips = [
+ "10.0.123.200",
+ "10.10.10.10",
+ "193.124.233.5",
+ "fe80::d210:cf3f:64e7:5423",
+ ]
+ networks = ["10.0.0.0/8", "fe80::/64"]
+ assert network.filter_by_networks(ips, networks) == [
+ "10.0.123.200",
+ "10.10.10.10",
+ "fe80::d210:cf3f:64e7:5423",
+ ]
+
+ def test_filter_by_networks_interfaces_dict(self):
+ interfaces = {
+ "wlan0": ["192.168.1.100", "217.5.140.67", "2001:db8::ff00:42:8329"],
+ "eth0": [
+ "2001:0DB8:0:CD30:123:4567:89AB:CDEF",
+ "192.168.1.101",
+ "10.0.123.201",
+ ],
+ }
+ assert network.filter_by_networks(
+ interfaces, ["192.168.1.0/24", "2001:db8::/48"]
+ ) == {
+ "wlan0": ["192.168.1.100", "2001:db8::ff00:42:8329"],
+ "eth0": ["2001:0DB8:0:CD30:123:4567:89AB:CDEF", "192.168.1.101"],
+ }
+
+ def test_filter_by_networks_catch_all(self):
+ ips = [
+ "10.0.123.200",
+ "10.10.10.10",
+ "193.124.233.5",
+ "fe80::d210:cf3f:64e7:5423",
+ ]
+ assert ips == network.filter_by_networks(ips, ["0.0.0.0/0", "::/0"])
--
2.23.0
++++++ add-missing-_utils-at-loader-grains_func.patch ++++++
>From 082fa07e5301414b5b834b731aaa96bd5d966de7 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Pablo=20Su=C3=A1rez=20Hern=C3=A1ndez?=
<psuarezhernandez(a)suse.com>
Date: Tue, 10 Mar 2020 13:16:05 +0000
Subject: [PATCH] Add missing _utils at loader grains_func
---
salt/loader.py | 1 +
1 file changed, 1 insertion(+)
diff --git a/salt/loader.py b/salt/loader.py
index c68562988d..742b2f8e22 100644
--- a/salt/loader.py
+++ b/salt/loader.py
@@ -683,6 +683,7 @@ def grain_funcs(opts, proxy=None):
__opts__ = salt.config.minion_config('/etc/salt/minion')
grainfuncs = salt.loader.grain_funcs(__opts__)
'''
+ _utils = utils(opts)
ret = LazyLoader(
_module_dirs(
opts,
--
2.23.0
++++++ add-missing-fun-for-returns-from-wfunc-executions.patch ++++++
>From 5c25babafd4e4bbe55626713851ea5d6345c43d1 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Pablo=20Su=C3=A1rez=20Hern=C3=A1ndez?=
<psuarezhernandez(a)suse.com>
Date: Wed, 9 Oct 2019 13:03:33 +0100
Subject: [PATCH] Add missing 'fun' for returns from wfunc executions
---
salt/client/ssh/__init__.py | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/salt/client/ssh/__init__.py b/salt/client/ssh/__init__.py
index 4881540837..1373274739 100644
--- a/salt/client/ssh/__init__.py
+++ b/salt/client/ssh/__init__.py
@@ -682,6 +682,8 @@ class SSH(object):
data = {'return': data}
if 'id' not in data:
data['id'] = id_
+ if 'fun' not in data:
+ data['fun'] = fun
data['jid'] = jid # make the jid in the payload the same as the jid in the tag
self.event.fire_event(
data,
@@ -797,6 +799,8 @@ class SSH(object):
data = {'return': data}
if 'id' not in data:
data['id'] = id_
+ if 'fun' not in data:
+ data['fun'] = fun
data['jid'] = jid # make the jid in the payload the same as the jid in the tag
self.event.fire_event(
data,
--
2.16.4
++++++ add-multi-file-support-and-globbing-to-the-filetree-.patch ++++++
>From 0a6b5e92a4a74dee94eb33a939600f8c2e429c01 Mon Sep 17 00:00:00 2001
From: Bo Maryniuk <bo(a)suse.de>
Date: Fri, 12 Oct 2018 16:20:40 +0200
Subject: [PATCH] Add multi-file support and globbing to the filetree
(U#50018)
Add more possible logs
Support grabbing multiple files
Collect system logs and boot logs
Support globbing in filetree
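
The globbing itself is plain ``glob.glob`` expansion with de-duplication
before the per-path handling; a condensed sketch of that idiom:

.. code-block:: python

    # Expand several patterns into a unique set of matching paths.
    import glob

    def expand(*patterns):
        paths = []
        for pattern in patterns:
            paths += glob.glob(pattern)  # a pattern with no match expands to []
        return set(paths)

    # e.g. expand('/var/log/boot.*', '/var/log/messages')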
---
salt/cli/support/intfunc.py | 49 +++++++++++++++++++++--------------
salt/cli/support/profiles/default.yml | 7 +++++
2 files changed, 37 insertions(+), 19 deletions(-)
diff --git a/salt/cli/support/intfunc.py b/salt/cli/support/intfunc.py
index 2727cd6394..f15f4d4097 100644
--- a/salt/cli/support/intfunc.py
+++ b/salt/cli/support/intfunc.py
@@ -6,6 +6,7 @@ Internal functions.
from __future__ import absolute_import, print_function, unicode_literals
import os
+import glob
from salt.cli.support.console import MessagesOutput
import salt.utils.files
@@ -13,7 +14,7 @@ import salt.utils.files
out = MessagesOutput()
-def filetree(collector, path):
+def filetree(collector, *paths):
'''
Add all files in the tree. If the "path" is a file,
only that file will be added.
@@ -21,22 +22,32 @@ def filetree(collector, path):
:param path: File or directory
:return:
'''
- if not path:
- out.error('Path not defined', ident=2)
- else:
- # The filehandler needs to be explicitly passed here, so PyLint needs to accept that.
- # pylint: disable=W8470
- if os.path.isfile(path):
- filename = os.path.basename(path)
- try:
- file_ref = salt.utils.files.fopen(path) # pylint: disable=W
- out.put('Add {}'.format(filename), indent=2)
- collector.add(filename)
- collector.link(title=path, path=file_ref)
- except Exception as err:
- out.error(err, ident=4)
- # pylint: enable=W8470
+ _paths = []
+ # Unglob
+ for path in paths:
+ _paths += glob.glob(path)
+ for path in set(_paths):
+ if not path:
+ out.error('Path not defined', ident=2)
+ elif not os.path.exists(path):
+            out.warning('Path {} does not exist'.format(path))
else:
- for fname in os.listdir(path):
- fname = os.path.join(path, fname)
- filetree(collector, fname)
+ # The filehandler needs to be explicitly passed here, so PyLint needs to accept that.
+ # pylint: disable=W8470
+ if os.path.isfile(path):
+ filename = os.path.basename(path)
+ try:
+ file_ref = salt.utils.files.fopen(path) # pylint: disable=W
+ out.put('Add {}'.format(filename), indent=2)
+ collector.add(filename)
+ collector.link(title=path, path=file_ref)
+ except Exception as err:
+ out.error(err, ident=4)
+ # pylint: enable=W8470
+ else:
+ try:
+ for fname in os.listdir(path):
+ fname = os.path.join(path, fname)
+                    filetree(collector, fname)
+ except Exception as err:
+ out.error(err, ident=4)
diff --git a/salt/cli/support/profiles/default.yml b/salt/cli/support/profiles/default.yml
index 01d9a26193..3defb5eef3 100644
--- a/salt/cli/support/profiles/default.yml
+++ b/salt/cli/support/profiles/default.yml
@@ -62,10 +62,17 @@ general-health:
- ps.top:
info: Top CPU consuming processes
+boot_log:
+ - filetree:
+ info: Collect boot logs
+ args:
+ - /var/log/boot.*
+
system.log:
# This works on any file system object.
- filetree:
info: Add system log
args:
- /var/log/syslog
+ - /var/log/messages
--
2.16.4
++++++ add-new-custom-suse-capability-for-saltutil-state-mo.patch ++++++
>From ad1323b4f83fa8f2954c0a965f4acaf91575a59b Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Pablo=20Su=C3=A1rez=20Hern=C3=A1ndez?=
<psuarezhernandez(a)suse.com>
Date: Thu, 26 Mar 2020 13:08:16 +0000
Subject: [PATCH] Add new custom SUSE capability for saltutil state
module
---
salt/grains/extra.py | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/salt/grains/extra.py b/salt/grains/extra.py
index 1082b05dba7830ee53078cff86b5183b5eea2829..b30ab0091fee7cda8f74b861e9e9f95f8ad85b39 100644
--- a/salt/grains/extra.py
+++ b/salt/grains/extra.py
@@ -80,5 +80,6 @@ def config():
def suse_backported_capabilities():
return {
'__suse_reserved_pkg_all_versions_support': True,
- '__suse_reserved_pkg_patches_support': True
+ '__suse_reserved_pkg_patches_support': True,
+ '__suse_reserved_saltutil_states_support': True
}
--
2.23.0
++++++ add-publish_batch-to-clearfuncs-exposed-methods.patch ++++++
>From a7fd55e534ed985800be53bcb54e4c51aba7516a Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Pablo=20Su=C3=A1rez=20Hern=C3=A1ndez?=
<psuarezhernandez(a)suse.com>
Date: Thu, 28 May 2020 09:37:08 +0100
Subject: [PATCH] Add publish_batch to ClearFuncs exposed methods
---
salt/master.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/salt/master.py b/salt/master.py
index e42b8385c3935726678d3b726f3b497f7dcef9ab..04ff721cd99e1fc9bffb46ab234c1e7055c3b7a0 100644
--- a/salt/master.py
+++ b/salt/master.py
@@ -1906,7 +1906,7 @@ class ClearFuncs(TransportMethods):
# These methods will be exposed to the transport layer by
# MWorker._handle_clear
expose_methods = (
- 'ping', 'publish', 'get_token', 'mk_token', 'wheel', 'runner',
+ 'ping', 'publish', 'publish_batch', 'get_token', 'mk_token', 'wheel', 'runner',
)
# The ClearFuncs object encapsulates the functions that can be executed in
--
2.23.0
++++++ add-saltssh-multi-version-support-across-python-inte.patch ++++++
>From 369567107fa18187f8cbc5040728037d0774287b Mon Sep 17 00:00:00 2001
From: Bo Maryniuk <bo(a)suse.de>
Date: Mon, 12 Mar 2018 12:01:39 +0100
Subject: [PATCH] Add SaltSSH multi-version support across Python
 interpreters.
Bugfix: crashes when OPTIONS.saltdir is a file
salt-ssh: allow server and client to run different python major versions
Handle non-directory on the /tmp
Bugfix: prevent partial fileset removal in /tmp
salt-ssh: compare checksums to detect newly generated thin on the server
Reset time at thin unpack
Bugfix: get a proper option for CLI and opts of wiping the tmp
Add docstring to get_tops
Remove unnecessary noise in imports
Refactor get_tops collector
Add logging to the get_tops
Update call script
Remove pre-caution
Update log debug message for tops collector
Reset default compression, if unknown is passed
Refactor archive creation flow
Add external shell-callable function to collect tops
Simplify tops gathering, bugfix alternative to Py2
find working executable
Add basic shareable module classifier
Add proper error handler, unmuting exceptions during top collection
Use common shared directory for compatible libraries
fix searching for python versions
Flatten error message string
Bail-out immediately if <2.6 version detected
Simplify shell cmd to get the version on Python 2.x
Remove stub that was previously moved upfront
Lintfix: PEP8 ident
Add logging on the error, when Python-2 version cannot be detected properly
Generate salt-call source, based on conditions
Add logging on remove failure on thin.tgz archive
Add config-based external tops gatherer
Change signature to pass the extended configuration to the thin generator
Update docstring to the salt-call generator
Implement get namespaces inclusion to the salt-call script on the client machine
Use new signature of the get call
Implement namespace selector, based on the current Python interpreter version
Add deps as a list, instead of a map
Add debug logging
Implement packaging an alternative version
Update salt-call script so it swaps the namespace according to the configuration
Compress thin.zip if zlib is available
Fix a system exit error message
Move compression fall-back operation
Add debug logging prior to the thin archive removal
Flatten the archive extension choice
Lintfix: PEP8 an empty line required
Bugfix: ZFS modules (zfs, zpool) crashes on non-ZFS systems
Add unit test case for the Salt SSH parts
Add unit test for missing dependencies on get_ext_tops
Postpone inheritance implementation
Refactor unit test for get_ext_tops
Add unit test for get_ext_tops checks interpreter configuration
Check python interpreter lock version
Add unit test for get_ext_tops checks the python locked interpreter value
Bugfix: report into warning log module name, not its config
Add unit test for dependencies check python version lock (inherently)
Mock os.path.isfile function
Update warning logging information
Add unit test for get_ext_tops module configuration validation
Do not use list of dicts for namespaces, just dict for namespaces.
Add unit test for get_ext_tops config verification
Fix unit tests for the new config structure
Add unit test for thin.gte call
Add unit test for dependency path adding function
Add unit test for thin_path function
Add unit test for salt-call source generator
Add unit test for get_ext_namespaces on empty configuration
Add get_ext_namespaces for namespace extractions into a tuple for python version
Remove unused variable
Add unit test for getting namespace failure when python maj/min versions are not defined
Add unit test to add tops based on the current interpreter
Add unit test for get_tops with extra modules
Add unit test for shared object modules top addition
Add unit test for thin_sum hashing
Add unit test for min_sum hashing
Add unit test for gen_thin verify for 2.6 Python version is a minimum requirement
Fix gen_thin exception on Python 3
Use object attribute instead of indices. Remove an empty line.
Add unit test for gen_thin compression type fallback
Move helper functions up by the class code
Update unit test doc
Add check for correct archiving mode is opened
Add unit test for gen_thin if control files are written correctly
Update docstring for fake version info constructor method
Add fake tarfile mock handler
Mock-out missing methods inside gen_thin
Move tarfile.open check to the end of the test
Add unit test for tree addition to the archive
Add shareable module to the gen_thin unit test
Fix docstring
Add unit test for an alternative version pack
Lintfix
Add documentation about updated Salt SSH features
Fix typo
Lintfix: PEP8 extra-line needed
Make the command more readable
Write all supported minimal python versions into a config file on the target machine
Get supported Python executable based on the config py-map
Add unit test for get_supported_py_config function typecheck
Add unit test for get_supported_py_config function base tops
Add unit test for get_supported_py_config function ext tops
Fix unit test for catching "supported-versions" was written down
Rephrase Salt SSH doc description
Re-phrase docstring for alternative Salt installation
require same major version while minor is allowed to be higher
Bugfix: remove minor version from the namespaced, version-specific directory
Fix unit tests for minor version removal of namespaced version-specific directory
Initialise the options directly to be a structure-ready object.
Disable wiping if state is executed
Properly mock a tempfile object
Support Python 2.6 versions
Add digest collector for file trees etc
Bugfix: recursive calls damage the configuration (reference problem)
Collect digest of the code
Get code checksum into the shim options
Get all the code content, not just Python sources
Bugfix: Python3 compat - string required instead of bytes
Lintfix: too many empty lines
Lintfix: blocked function used
Bugfix: key error master_tops_first
Fix unit tests for the checksum generator
Use code checksum to update thin archive on client's cache
Lintfix
Set master_top_first to False by default
---
doc/topics/releases/fluorine.rst | 178 +++++++++++++++++++++++++++++++++++++++
salt/client/ssh/ssh_py_shim.py | 4 +
salt/utils/thin.py | 1 +
3 files changed, 183 insertions(+)
create mode 100644 doc/topics/releases/fluorine.rst
diff --git a/doc/topics/releases/fluorine.rst b/doc/topics/releases/fluorine.rst
new file mode 100644
index 0000000000..40c69e25cc
--- /dev/null
+++ b/doc/topics/releases/fluorine.rst
@@ -0,0 +1,176 @@
+:orphan:
+
+======================================
+Salt Release Notes - Codename Fluorine
+======================================
+
+
+Minion Startup Events
+---------------------
+
+When a minion starts up it sends a notification on the event bus with a tag
+that looks like this: `salt/minion/<minion_id>/start`. For historical reasons
+the minion also sends a similar event with an event tag like this:
+`minion_start`. This duplication can cause a lot of clutter on the event bus
+when there are many minions. Set `enable_legacy_startup_events: False` in the
+minion config to ensure only the `salt/minion/<minion_id>/start` events are
+sent.
+
+The new :conf_minion:`enable_legacy_startup_events` minion config option
+defaults to ``True``, but will be set to default to ``False`` beginning with
+the Neon release of Salt.
+
+The Salt Syndic currently sends an old style `syndic_start` event as well. The
+syndic respects :conf_minion:`enable_legacy_startup_events` as well.
+
+
+Deprecations
+------------
+
+Module Deprecations
+===================
+
+The ``napalm_network`` module had the following changes:
+
+- Support for the ``template_path`` has been removed in the ``load_template``
+ function. This is because support for NAPALM native templates has been
+ dropped.
+
+The ``trafficserver`` module had the following changes:
+
+- Support for the ``match_var`` function was removed. Please use the
+ ``match_metric`` function instead.
+- Support for the ``read_var`` function was removed. Please use the
+ ``read_config`` function instead.
+- Support for the ``set_var`` function was removed. Please use the
+ ``set_config`` function instead.
+
+The ``win_update`` module has been removed. It has been replaced by ``win_wua``
+module.
+
+The ``win_wua`` module had the following changes:
+
+- Support for the ``download_update`` function has been removed. Please use the
+ ``download`` function instead.
+- Support for the ``download_updates`` function has been removed. Please use the
+ ``download`` function instead.
+- Support for the ``install_update`` function has been removed. Please use the
+ ``install`` function instead.
+- Support for the ``install_updates`` function has been removed. Please use the
+ ``install`` function instead.
+- Support for the ``list_update`` function has been removed. Please use the
+ ``get`` function instead.
+- Support for the ``list_updates`` function has been removed. Please use the
+ ``list`` function instead.
+
+Pillar Deprecations
+===================
+
+The ``vault`` pillar had the following changes:
+
+- Support for the ``profile`` argument was removed. Any options passed up until
+ and following the first ``path=`` are discarded.
+
+Roster Deprecations
+===================
+
+The ``cache`` roster had the following changes:
+
+- Support for ``roster_order`` as a list or tuple has been removed. As of the
+ ``Fluorine`` release, ``roster_order`` must be a dictionary.
+- The ``roster_order`` option now includes IPv6 in addition to IPv4 for the
+ ``private``, ``public``, ``global`` or ``local`` settings. The syntax for these
+ settings has changed to ``ipv4-*`` or ``ipv6-*``, respectively.
+
+State Deprecations
+==================
+
+The ``docker`` state has been removed. The following functions should be used
+instead.
+
+- The ``docker.running`` function was removed. Please update applicable SLS files
+ to use the ``docker_container.running`` function instead.
+- The ``docker.stopped`` function was removed. Please update applicable SLS files
+ to use the ``docker_container.stopped`` function instead.
+- The ``docker.absent`` function was removed. Please update applicable SLS files
+ to use the ``docker_container.absent`` function instead.
+- The ``docker.network_present`` function was removed. Please update applicable
+ SLS files to use the ``docker_network.present`` function instead.
+- The ``docker.network_absent`` function was removed. Please update applicable
+ SLS files to use the ``docker_network.absent`` function instead.
+- The ``docker.image_present`` function was removed. Please update applicable SLS
+ files to use the ``docker_image.present`` function instead.
+- The ``docker.image_absent`` function was removed. Please update applicable SLS
+ files to use the ``docker_image.absent`` function instead.
+- The ``docker.volume_present`` function was removed. Please update applicable SLS
+ files to use the ``docker_volume.present`` function instead.
+- The ``docker.volume_absent`` function was removed. Please update applicable SLS
+ files to use the ``docker_volume.absent`` function instead.
+
+The ``docker_network`` state had the following changes:
+
+- Support for the ``driver`` option has been removed from the ``absent`` function.
+ This option had no functionality in ``docker_network.absent``.
+
+The ``git`` state had the following changes:
+
+- Support for the ``ref`` option in the ``detached`` state has been removed.
+ Please use the ``rev`` option instead.
+
+The ``k8s`` state has been removed. The following functions should be used
+instead:
+
+- The ``k8s.label_absent`` function was removed. Please update applicable SLS
+ files to use the ``kubernetes.node_label_absent`` function instead.
+- The ``k8s.label_present`` function was removed. Please update applicable SLS
+ files to use the ``kubernetes.node_label_present`` function instead.
+- The ``k8s.label_folder_absent`` function was removed. Please update applicable
+ SLS files to use the ``kubernetes.node_label_folder_absent`` function instead.
+
+The ``netconfig`` state had the following changes:
+
+- Support for the ``template_path`` option in the ``managed`` state has been
+ removed. This is because support for NAPALM native templates has been dropped.
+
+The ``trafficserver`` state had the following changes:
+
+- Support for the ``set_var`` function was removed. Please use the ``config``
+ function instead.
+
+The ``win_update`` state has been removed. Please use the ``win_wua`` state instead.
+
+SaltSSH major updates
+=====================
+
+SaltSSH now works across different major Python versions. Python 2.7 through Python 3.x
+are now supported transparently. The requirement, however, is that the Salt master has
+Salt installed, including all related dependencies, for both Python 2 and Python 3.
+Everything needs to be importable from the respective Python environment.
+
+SaltSSH can bundle up an arbitrary version of Salt. If there is, for example, an old
+box running an outdated and unsupported Python 2.6, it can still be accessed from a
+Salt master running Python 3.5 or newer. This feature requires additional
+configuration in /etc/salt/master as follows:
+
+
+.. code-block:: yaml
+
+ ssh_ext_alternatives:
+    2016.3: # Namespace; can actually be anything.
+ py-version: [2, 6] # Constraint to specific interpreter version
+ path: /opt/2016.3/salt # Main Salt installation
+ dependencies: # List of dependencies and their installation paths
+ jinja2: /opt/jinja2
+ yaml: /opt/yaml
+ tornado: /opt/tornado
+ msgpack: /opt/msgpack
+ certifi: /opt/certifi
+ singledispatch: /opt/singledispatch.py
+ singledispatch_helpers: /opt/singledispatch_helpers.py
+ markupsafe: /opt/markupsafe
+ backports_abc: /opt/backports_abc.py
+
+It is also possible to use several alternative versions of Salt. You can, for instance,
+generate a minimal tarball using runners and include that. This is only possible when
+that specific Salt version is also available on the master machine, although it does
+not need to be installed together with the older Python interpreter.
diff --git a/salt/client/ssh/ssh_py_shim.py b/salt/client/ssh/ssh_py_shim.py
index cd7549a178..95b3931a32 100644
--- a/salt/client/ssh/ssh_py_shim.py
+++ b/salt/client/ssh/ssh_py_shim.py
@@ -165,6 +165,9 @@ def unpack_thin(thin_path):
old_umask = os.umask(0o077) # pylint: disable=blacklisted-function
tfile.extractall(path=OPTIONS.saltdir)
tfile.close()
+ checksum_path = os.path.normpath(os.path.join(OPTIONS.saltdir, "thin_checksum"))
+ with open(checksum_path, 'w') as chk:
+ chk.write(OPTIONS.checksum + '\n')
os.umask(old_umask) # pylint: disable=blacklisted-function
try:
os.unlink(thin_path)
@@ -358,5 +361,6 @@ def main(argv): # pylint: disable=W0613
return retcode
+
if __name__ == '__main__':
sys.exit(main(sys.argv))
diff --git a/salt/utils/thin.py b/salt/utils/thin.py
index 8496db9569..0ff31cef39 100644
--- a/salt/utils/thin.py
+++ b/salt/utils/thin.py
@@ -9,6 +9,7 @@ from __future__ import absolute_import, print_function, unicode_literals
import copy
import logging
import os
+import copy
import shutil
import subprocess
import sys
--
2.16.4
++++++ add-standalone-configuration-file-for-enabling-packa.patch ++++++
>From 717c9bc6cb81994c5f23de87cfa91112fa7bf89c Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Pablo=20Su=C3=A1rez=20Hern=C3=A1ndez?=
<psuarezhernandez(a)suse.com>
Date: Wed, 22 May 2019 13:00:46 +0100
Subject: [PATCH] Add standalone configuration file for enabling package
formulas
---
conf/suse/standalone-formulas-configuration.conf | 4 ++++
1 file changed, 4 insertions(+)
create mode 100644 conf/suse/standalone-formulas-configuration.conf
diff --git a/conf/suse/standalone-formulas-configuration.conf b/conf/suse/standalone-formulas-configuration.conf
new file mode 100644
index 0000000000..94d05fb2ee
--- /dev/null
+++ b/conf/suse/standalone-formulas-configuration.conf
@@ -0,0 +1,4 @@
+file_roots:
+ base:
+ - /usr/share/salt-formulas/states
+ - /srv/salt
--
2.16.4
++++++ add-supportconfig-module-for-remote-calls-and-saltss.patch ++++++
++++ 1407 lines (skipped)
++++++ add-virt.all_capabilities.patch ++++++
>From 82ddc9d93f6c0d6bc1e8dc6ebd30d6809d9f4d8f Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?C=C3=A9dric=20Bosdonnat?= <cbosdonnat(a)suse.com>
Date: Thu, 18 Oct 2018 13:32:59 +0200
Subject: [PATCH] Add virt.all_capabilities
In order to get all possible capabilities from a host, the user has to
call virt.capabilities, and then loop over the guests and domains
before calling virt.domain_capabilities for each of them.
This commit embeds all this logic to get them all in a single
virt.all_capabilities call.
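
Roughly, this is the loop callers previously had to write by hand; a sketch
based on the dictionary shapes used by the parsing helpers in this patch,
assuming it runs where the ``__salt__`` dunder is available:

.. code-block:: python

    # Manual equivalent of virt.all_capabilities() (sketch).
    caps = __salt__['virt.capabilities']()
    domain_caps = []
    for guest in caps['guests']:
        arch = guest.get('arch', {}).get('name')
        for domain in guest.get('arch', {}).get('domains', {}):
            domain_caps.append(
                __salt__['virt.domain_capabilities'](arch=arch, domain=domain))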
---
salt/modules/virt.py | 107 +++++++++++++++++++++++++++++-----------
tests/unit/modules/test_virt.py | 56 +++++++++++++++++++++
2 files changed, 134 insertions(+), 29 deletions(-)
diff --git a/salt/modules/virt.py b/salt/modules/virt.py
index a2412bb745..3889238ecd 100644
--- a/salt/modules/virt.py
+++ b/salt/modules/virt.py
@@ -4254,37 +4254,10 @@ def _parse_caps_loader(node):
return result
-def domain_capabilities(emulator=None, arch=None, machine=None, domain=None, **kwargs):
+def _parse_domain_caps(caps):
'''
- Return the domain capabilities given an emulator, architecture, machine or virtualization type.
-
- .. versionadded:: 2019.2.0
-
- :param emulator: return the capabilities for the given emulator binary
- :param arch: return the capabilities for the given CPU architecture
- :param machine: return the capabilities for the given emulated machine type
- :param domain: return the capabilities for the given virtualization type.
- :param connection: libvirt connection URI, overriding defaults
- :param username: username to connect with, overriding defaults
- :param password: password to connect with, overriding defaults
-
- The list of the possible emulator, arch, machine and domain can be found in
- the host capabilities output.
-
- If none of the parameters is provided the libvirt default domain capabilities
- will be returned.
-
- CLI Example:
-
- .. code-block:: bash
-
- salt '*' virt.domain_capabilities arch='x86_64' domain='kvm'
-
+ Parse the XML document of domain capabilities into a structure.
'''
- conn = __get_conn(**kwargs)
- caps = ElementTree.fromstring(conn.getDomainCapabilities(emulator, arch, machine, domain, 0))
- conn.close()
-
result = {
'emulator': caps.find('path').text if caps.find('path') is not None else None,
'domain': caps.find('domain').text if caps.find('domain') is not None else None,
@@ -4324,6 +4297,82 @@ def domain_capabilities(emulator=None, arch=None, machine=None, domain=None, **k
return result
+def domain_capabilities(emulator=None, arch=None, machine=None, domain=None, **kwargs):
+ '''
+ Return the domain capabilities given an emulator, architecture, machine or virtualization type.
+
+ .. versionadded:: Fluorine
+
+ :param emulator: return the capabilities for the given emulator binary
+ :param arch: return the capabilities for the given CPU architecture
+ :param machine: return the capabilities for the given emulated machine type
+ :param domain: return the capabilities for the given virtualization type.
+ :param connection: libvirt connection URI, overriding defaults
+ :param username: username to connect with, overriding defaults
+ :param password: password to connect with, overriding defaults
+
+ The list of the possible emulator, arch, machine and domain can be found in
+ the host capabilities output.
+
+ If none of the parameters is provided, the libvirt default one is returned.
+
+ CLI Example:
+
+ .. code-block:: bash
+
+ salt '*' virt.domain_capabilities arch='x86_64' domain='kvm'
+
+ '''
+ conn = __get_conn(**kwargs)
+ result = []
+ try:
+ caps = ElementTree.fromstring(conn.getDomainCapabilities(emulator, arch, machine, domain, 0))
+ result = _parse_domain_caps(caps)
+ finally:
+ conn.close()
+
+ return result
+
+
+def all_capabilities(**kwargs):
+ '''
+ Return the host and domain capabilities in a single call.
+
+ .. versionadded:: Neon
+
+ :param connection: libvirt connection URI, overriding defaults
+ :param username: username to connect with, overriding defaults
+ :param password: password to connect with, overriding defaults
+
+ CLI Example:
+
+ .. code-block:: bash
+
+ salt '*' virt.all_capabilities
+
+ '''
+ conn = __get_conn(**kwargs)
+ result = {}
+ try:
+ host_caps = ElementTree.fromstring(conn.getCapabilities())
+ domains = [[(guest.get('arch', {}).get('name', None), key)
+ for key in guest.get('arch', {}).get('domains', {}).keys()]
+ for guest in [_parse_caps_guest(guest) for guest in host_caps.findall('guest')]]
+ flattened = [pair for item in (x for x in domains) for pair in item]
+ result = {
+ 'host': {
+ 'host': _parse_caps_host(host_caps.find('host')),
+ 'guests': [_parse_caps_guest(guest) for guest in host_caps.findall('guest')]
+ },
+ 'domains': [_parse_domain_caps(ElementTree.fromstring(
+ conn.getDomainCapabilities(None, arch, None, domain)))
+ for (arch, domain) in flattened]}
+ finally:
+ conn.close()
+
+ return result
+
+
def cpu_baseline(full=False, migratable=False, out='libvirt', **kwargs):
'''
Return the optimal 'custom' CPU baseline config for VM's on this minion
diff --git a/tests/unit/modules/test_virt.py b/tests/unit/modules/test_virt.py
index 32f4302e5f..94372c6d72 100644
--- a/tests/unit/modules/test_virt.py
+++ b/tests/unit/modules/test_virt.py
@@ -2216,6 +2216,62 @@ class VirtTestCase(TestCase, LoaderModuleMockMixin):
self.assertEqual(expected, caps)
+ def test_all_capabilities(self):
+ '''
+ Test the virt.domain_capabilities default output
+ '''
+ domainXml = '''
+<domainCapabilities>
+ <path>/usr/bin/qemu-system-x86_64</path>
+ <domain>kvm</domain>
+ <machine>virt-2.12</machine>
+ <arch>x86_64</arch>
+ <vcpu max='255'/>
+ <iothreads supported='yes'/>
+</domainCapabilities>
+ '''
+ hostXml = '''
+<capabilities>
+ <host>
+ <uuid>44454c4c-3400-105a-8033-b3c04f4b344a</uuid>
+ <cpu>
+ <arch>x86_64</arch>
+ <model>Nehalem</model>
+ <vendor>Intel</vendor>
+ <microcode version='25'/>
+ <topology sockets='1' cores='4' threads='2'/>
+ </cpu>
+ </host>
+ <guest>
+ <os_type>hvm</os_type>
+ <arch name='x86_64'>
+ <wordsize>64</wordsize>
+ <emulator>/usr/bin/qemu-system-x86_64</emulator>
+ <machine maxCpus='255'>pc-i440fx-2.6</machine>
+ <machine canonical='pc-i440fx-2.6' maxCpus='255'>pc</machine>
+ <machine maxCpus='255'>pc-0.12</machine>
+ <domain type='qemu'/>
+ <domain type='kvm'>
+ <emulator>/usr/bin/qemu-kvm</emulator>
+ <machine maxCpus='255'>pc-i440fx-2.6</machine>
+ <machine canonical='pc-i440fx-2.6' maxCpus='255'>pc</machine>
+ <machine maxCpus='255'>pc-0.12</machine>
+ </domain>
+ </arch>
+ </guest>
+</capabilities>
+ '''
+
+ # pylint: disable=no-member
+ self.mock_conn.getCapabilities.return_value = hostXml
+ self.mock_conn.getDomainCapabilities.side_effect = [
+ domainXml, domainXml.replace('<domain>kvm', '<domain>qemu')]
+ # pylint: enable=no-member
+
+ caps = virt.all_capabilities()
+ self.assertEqual('44454c4c-3400-105a-8033-b3c04f4b344a', caps['host']['host']['uuid'])
+ self.assertEqual(set(['qemu', 'kvm']), set([domainCaps['domain'] for domainCaps in caps['domains']]))
+
def test_network_tag(self):
'''
Test virt._get_net_xml() with VLAN tag
--
2.16.4
++++++ adds-explicit-type-cast-for-port.patch ++++++
>From 2182f2cbc835fee8a95101ce0c722d582b7456aa Mon Sep 17 00:00:00 2001
From: Jochen Breuer <jbreuer(a)suse.de>
Date: Wed, 1 Apr 2020 16:13:23 +0200
Subject: [PATCH] Adds explicit type cast for port
If a port was passed as a string, the execution logic was broken
and a wrong set of remotes was returned.
The type casting to int solves this issue.
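
The underlying failure is ordinary Python typing: an ``int`` never compares
equal to a ``str``, so every matching line was skipped. A minimal illustration:

.. code-block:: python

    remote_port = '4505'  # parsed from the netlink tool output: a string
    port = '4505'         # the caller may also pass the port as a string

    assert (int(remote_port) != port) is True          # int vs. str: never equal
    assert (int(remote_port) != int(port)) is False    # cast both sides instead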
---
salt/utils/network.py | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/salt/utils/network.py b/salt/utils/network.py
index d6543ff160..def997f3dc 100644
--- a/salt/utils/network.py
+++ b/salt/utils/network.py
@@ -1457,9 +1457,9 @@ def _netlink_tool_remote_on(port, which_end):
local_host, local_port = chunks[3].rsplit(':', 1)
remote_host, remote_port = chunks[4].rsplit(':', 1)
- if which_end == 'remote_port' and int(remote_port) != port:
+ if which_end == 'remote_port' and int(remote_port) != int(port):
continue
- if which_end == 'local_port' and int(local_port) != port:
+ if which_end == 'local_port' and int(local_port) != int(port):
continue
remotes.add(remote_host.strip("[]"))
--
2.16.4
++++++ allow-passing-kwargs-to-pkg.list_downloaded-bsc-1140.patch ++++++
>From 206a2f7c4c1104f2f35dfa2c0b775bef4adc5b91 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Pablo=20Su=C3=A1rez=20Hern=C3=A1ndez?=
<psuarezhernandez(a)suse.com>
Date: Wed, 3 Jul 2019 09:34:50 +0100
Subject: [PATCH] Allow passing kwargs to pkg.list_downloaded
(bsc#1140193)
Add unit test for pkg.list_downloaded with kwargs
---
salt/modules/zypperpkg.py | 2 +-
tests/unit/modules/test_zypperpkg.py | 27 +++++++++++++++++++++++++++
2 files changed, 28 insertions(+), 1 deletion(-)
diff --git a/salt/modules/zypperpkg.py b/salt/modules/zypperpkg.py
index 582caffb59..3760b525e7 100644
--- a/salt/modules/zypperpkg.py
+++ b/salt/modules/zypperpkg.py
@@ -2557,7 +2557,7 @@ def download(*packages, **kwargs):
)
-def list_downloaded(root=None):
+def list_downloaded(root=None, **kwargs):
'''
.. versionadded:: 2017.7.0
diff --git a/tests/unit/modules/test_zypperpkg.py b/tests/unit/modules/test_zypperpkg.py
index 3a6466f061..12c22bfcb2 100644
--- a/tests/unit/modules/test_zypperpkg.py
+++ b/tests/unit/modules/test_zypperpkg.py
@@ -767,6 +767,33 @@ Repository 'DUMMY' not found by its alias, number, or URI.
self.assertEqual(len(list_patches), 3)
self.assertDictEqual(list_patches, PATCHES_RET)
+ @patch('salt.utils.path.os_walk', MagicMock(return_value=[('test', 'test', 'test')]))
+ @patch('os.path.getsize', MagicMock(return_value=123456))
+ @patch('os.path.getctime', MagicMock(return_value=1234567890.123456))
+ @patch('fnmatch.filter', MagicMock(return_value=['/var/cache/zypper/packages/foo/bar/test_package.rpm']))
+ def test_list_downloaded_with_kwargs(self):
+ '''
+ Test downloaded packages listing.
+
+ :return:
+ '''
+ DOWNLOADED_RET = {
+ 'test-package': {
+ '1.0': {
+ 'path': '/var/cache/zypper/packages/foo/bar/test_package.rpm',
+ 'size': 123456,
+ 'creation_date_time_t': 1234567890,
+ 'creation_date_time': '2009-02-13T23:31:30',
+ }
+ }
+ }
+
+ with patch.dict(zypper.__salt__, {'lowpkg.bin_pkg_info': MagicMock(return_value={'name': 'test-package',
+ 'version': '1.0'})}):
+ list_downloaded = zypper.list_downloaded(kw1=True, kw2=False)
+ self.assertEqual(len(list_downloaded), 1)
+ self.assertDictEqual(list_downloaded, DOWNLOADED_RET)
+
@patch('salt.utils.path.os_walk', MagicMock(return_value=[('test', 'test', 'test')]))
@patch('os.path.getsize', MagicMock(return_value=123456))
@patch('os.path.getctime', MagicMock(return_value=1234567890.123456))
--
2.16.4
++++++ apply-patch-from-upstream-to-support-python-3.8.patch ++++++
>From b8226467e665650a0587b8fd64242faefb805e13 Mon Sep 17 00:00:00 2001
From: Steve Kowalik <steven(a)wedontsleep.org>
Date: Mon, 17 Feb 2020 15:34:00 +1100
Subject: [PATCH] Apply patch from upstream to support Python 3.8
Apply saltstack/salt#56031 to support Python 3.8, which removed a
deprecated module and changed some behaviour. Add a {Build,}Requires on
python-distro, since it is now required.
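
The core of the compatibility change is a fallback import: Python 3.8 removed
``platform.linux_distribution`` (and the private ``platform._supported_dists``),
so the stdlib function is probed first and the ``distro`` package is used
otherwise. Condensed from the grains change below:

.. code-block:: python

    # Condensed fallback pattern from salt/grains/core.py.
    import warnings

    try:
        from platform import linux_distribution as _deprecated_linux_distribution

        def linux_distribution(**kwargs):
            with warnings.catch_warnings():
                warnings.simplefilter('ignore')  # hide the DeprecationWarning
                return _deprecated_linux_distribution(**kwargs)
    except ImportError:
        from distro import linux_distribution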
---
pkg/suse/salt.spec | 2 ++
salt/config/__init__.py | 4 +++-
salt/grains/core.py | 16 ++++++++--------
salt/renderers/stateconf.py | 8 ++++----
tests/unit/modules/test_virt.py | 2 +-
5 files changed, 18 insertions(+), 14 deletions(-)
diff --git a/pkg/suse/salt.spec b/pkg/suse/salt.spec
index e3e678af3b..0f6a9bc012 100644
--- a/pkg/suse/salt.spec
+++ b/pkg/suse/salt.spec
@@ -62,6 +62,7 @@ BuildRequires: python-psutil
BuildRequires: python-requests >= 1.0.0
BuildRequires: python-tornado >= 4.2.1
BuildRequires: python-yaml
+BuildRequires: python-distro
# requirements/opt.txt (not all)
# BuildRequires: python-MySQL-python
# BuildRequires: python-timelib
@@ -112,6 +113,7 @@ Requires: python-psutil
Requires: python-requests >= 1.0.0
Requires: python-tornado >= 4.2.1
Requires: python-yaml
+Requires: python-distro
%if 0%{?suse_version}
# requirements/opt.txt (not all)
Recommends: python-MySQL-python
diff --git a/salt/config/__init__.py b/salt/config/__init__.py
index 0ebe1181dd..f484d94e7e 100644
--- a/salt/config/__init__.py
+++ b/salt/config/__init__.py
@@ -3196,7 +3196,9 @@ def apply_cloud_providers_config(overrides, defaults=None):
# Merge provided extends
keep_looping = False
for alias, entries in six.iteritems(providers.copy()):
- for driver, details in six.iteritems(entries):
+ for driver in list(six.iterkeys(entries)):
+ # Don't use iteritems, because the values of the dictionary will be changed
+ details = entries[driver]
if 'extends' not in details:
# Extends resolved or non existing, continue!
diff --git a/salt/grains/core.py b/salt/grains/core.py
index f410985198..358b66fdb0 100644
--- a/salt/grains/core.py
+++ b/salt/grains/core.py
@@ -40,20 +40,20 @@ except ImportError:
__proxyenabled__ = ['*']
__FQDN__ = None
-# Extend the default list of supported distros. This will be used for the
-# /etc/DISTRO-release checking that is part of linux_distribution()
-from platform import _supported_dists
-_supported_dists += ('arch', 'mageia', 'meego', 'vmware', 'bluewhite64',
- 'slamd64', 'ovs', 'system', 'mint', 'oracle', 'void')
-
# linux_distribution deprecated in py3.7
try:
from platform import linux_distribution as _deprecated_linux_distribution
+ # Extend the default list of supported distros. This will be used for the
+ # /etc/DISTRO-release checking that is part of linux_distribution()
+ from platform import _supported_dists
+ _supported_dists += ('arch', 'mageia', 'meego', 'vmware', 'bluewhite64',
+ 'slamd64', 'ovs', 'system', 'mint', 'oracle', 'void')
+
def linux_distribution(**kwargs):
with warnings.catch_warnings():
warnings.simplefilter("ignore")
- return _deprecated_linux_distribution(**kwargs)
+ return _deprecated_linux_distribution(supported_dists=_supported_dists, **kwargs)
except ImportError:
from distro import linux_distribution
@@ -1976,7 +1976,7 @@ def os_data():
)
(osname, osrelease, oscodename) = \
[x.strip('"').strip("'") for x in
- linux_distribution(supported_dists=_supported_dists)]
+ linux_distribution()]
# Try to assign these three names based on the lsb info, they tend to
# be more accurate than what python gets from /etc/DISTRO-release.
# It's worth noting that Ubuntu has patched their Python distribution
diff --git a/salt/renderers/stateconf.py b/salt/renderers/stateconf.py
index cfce9e6926..5c8a8322ed 100644
--- a/salt/renderers/stateconf.py
+++ b/salt/renderers/stateconf.py
@@ -224,10 +224,10 @@ def render(input, saltenv='base', sls='', argline='', **kws):
tmplctx = STATE_CONF.copy()
if tmplctx:
prefix = sls + '::'
- for k in six.iterkeys(tmplctx): # iterate over a copy of keys
- if k.startswith(prefix):
- tmplctx[k[len(prefix):]] = tmplctx[k]
- del tmplctx[k]
+ tmplctx = {
+ k[len(prefix):] if k.startswith(prefix) else k: v
+ for k, v in six.iteritems(tmplctx)
+ }
else:
tmplctx = {}
diff --git a/tests/unit/modules/test_virt.py b/tests/unit/modules/test_virt.py
index 94372c6d72..d762dcc479 100644
--- a/tests/unit/modules/test_virt.py
+++ b/tests/unit/modules/test_virt.py
@@ -1256,7 +1256,7 @@ class VirtTestCase(TestCase, LoaderModuleMockMixin):
<alias name='net1'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
</interface>
- <graphics type='spice' port='5900' autoport='yes' listen='127.0.0.1'>
+ <graphics type='spice' listen='127.0.0.1' autoport='yes'>
<listen type='address' address='127.0.0.1'/>
</graphics>
<video>
--
2.16.4
++++++ async-batch-implementation.patch ++++++
++++ 942 lines (skipped)
++++++ avoid-excessive-syslogging-by-watchdog-cronjob-58.patch ++++++
>From 638ad2baa04e96f744f97c97f3840151937e8aac Mon Sep 17 00:00:00 2001
From: Hubert Mantel <mantel(a)suse.de>
Date: Mon, 27 Nov 2017 13:55:13 +0100
Subject: [PATCH] avoid excessive syslogging by watchdog cronjob (#58)
---
pkg/suse/salt-minion | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/pkg/suse/salt-minion b/pkg/suse/salt-minion
index 2e418094ed..73a91ebd62 100755
--- a/pkg/suse/salt-minion
+++ b/pkg/suse/salt-minion
@@ -55,7 +55,7 @@ WATCHDOG_CRON="/etc/cron.d/salt-minion"
set_watchdog() {
if [ ! -f $WATCHDOG_CRON ]; then
- echo -e '* * * * * root /usr/bin/salt-daemon-watcher --with-init\n' > $WATCHDOG_CRON
+ echo -e '-* * * * * root /usr/bin/salt-daemon-watcher --with-init\n' > $WATCHDOG_CRON
# Kick the watcher for 1 minute immediately, because cron will wake up only afterwards
/usr/bin/salt-daemon-watcher --with-init & disown
fi
--
2.16.4
++++++ avoid-has_docker-true-if-import-messes-with-salt.uti.patch ++++++
>From 8ccd2d94da88742401451016b6ea351676e83ffa Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Pablo=20Su=C3=A1rez=20Hern=C3=A1ndez?=
<psuarezhernandez(a)suse.com>
Date: Thu, 28 May 2020 16:38:04 +0100
Subject: [PATCH] Avoid HAS_DOCKER true if import messes with
salt.utils.docker (bsc#1172075)
---
salt/modules/swarm.py | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/salt/modules/swarm.py b/salt/modules/swarm.py
index ea327ce640040bdbd7e7077bc6bbb59a9f0ade4a..6f16f41ece01738f3a04d11211fa5e96cd8155b4 100644
--- a/salt/modules/swarm.py
+++ b/salt/modules/swarm.py
@@ -30,9 +30,13 @@ from __future__ import absolute_import, unicode_literals, print_function
# Import Salt libs
import salt.utils.json
+HAS_DOCKER = False
+
try:
import docker
- HAS_DOCKER = True
+
+ if hasattr(docker, "from_env"):
+ HAS_DOCKER = True
except ImportError:
HAS_DOCKER = False
--
2.23.0
++++++ avoid-traceback-when-http.query-request-cannot-be-pe.patch ++++++
>From e45658e074fbf8c038816dc56b86c3daf33d6ebc Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Pablo=20Su=C3=A1rez=20Hern=C3=A1ndez?=
<psuarezhernandez(a)suse.com>
Date: Mon, 29 Jul 2019 11:17:53 +0100
Subject: [PATCH] Avoid traceback when http.query request cannot be
performed (bsc#1128554)
Improve error logging when http.query cannot be performed
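
With this change a failed request is reported as data instead of raising, so
callers can branch on the returned dictionary. A minimal sketch (the host name
is a placeholder; ``status=True`` mirrors the patched branch):

.. code-block:: python

    # Sketch: handle an unreachable host without a traceback.
    import salt.utils.http

    ret = salt.utils.http.query('http://unreachable.example.invalid/', status=True)
    if ret.get('error'):
        # For socket-level failures the patched code sets 'status' to 0.
        print('http.query failed: {0}'.format(ret['error']))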
---
salt/utils/http.py | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/salt/utils/http.py b/salt/utils/http.py
index dee0563679..c2fdffb266 100644
--- a/salt/utils/http.py
+++ b/salt/utils/http.py
@@ -580,11 +580,13 @@ def query(url,
except salt.ext.tornado.httpclient.HTTPError as exc:
ret['status'] = exc.code
ret['error'] = six.text_type(exc)
+ log.error("Cannot perform 'http.query': {0} - {1}".format(url_full, ret['error']))
return ret
- except socket.gaierror as exc:
+ except (socket.herror, socket.error, socket.timeout, socket.gaierror) as exc:
if status is True:
ret['status'] = 0
ret['error'] = six.text_type(exc)
+ log.error("Cannot perform 'http.query': {0} - {1}".format(url_full, ret['error']))
return ret
if stream is True or handle is True:
--
2.16.4
++++++ batch-async-catch-exceptions-and-safety-unregister-a.patch ++++++
>From c5edf396ffd66b6ac1479aa01367aae3eff7683d Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Pablo=20Su=C3=A1rez=20Hern=C3=A1ndez?=
<psuarezhernandez(a)suse.com>
Date: Fri, 28 Feb 2020 15:11:53 +0000
Subject: [PATCH] Batch Async: Catch exceptions and safely unregister and
close instances
---
salt/cli/batch_async.py | 156 +++++++++++++++++++++++-----------------
1 file changed, 89 insertions(+), 67 deletions(-)
diff --git a/salt/cli/batch_async.py b/salt/cli/batch_async.py
index da069b64bd..b8f272ed67 100644
--- a/salt/cli/batch_async.py
+++ b/salt/cli/batch_async.py
@@ -13,7 +13,6 @@ import salt.client
# pylint: enable=import-error,no-name-in-module,redefined-builtin
import logging
-import fnmatch
log = logging.getLogger(__name__)
@@ -104,22 +103,25 @@ class BatchAsync(object):
def __event_handler(self, raw):
if not self.event:
return
- mtag, data = self.event.unpack(raw, self.event.serial)
- for (pattern, op) in self.patterns:
- if mtag.startswith(pattern[:-1]):
- minion = data['id']
- if op == 'ping_return':
- self.minions.add(minion)
- if self.targeted_minions == self.minions:
- self.event.io_loop.spawn_callback(self.start_batch)
- elif op == 'find_job_return':
- if data.get("return", None):
- self.find_job_returned.add(minion)
- elif op == 'batch_run':
- if minion in self.active:
- self.active.remove(minion)
- self.done_minions.add(minion)
- self.event.io_loop.spawn_callback(self.schedule_next)
+ try:
+ mtag, data = self.event.unpack(raw, self.event.serial)
+ for (pattern, op) in self.patterns:
+ if mtag.startswith(pattern[:-1]):
+ minion = data['id']
+ if op == 'ping_return':
+ self.minions.add(minion)
+ if self.targeted_minions == self.minions:
+ self.event.io_loop.spawn_callback(self.start_batch)
+ elif op == 'find_job_return':
+ if data.get("return", None):
+ self.find_job_returned.add(minion)
+ elif op == 'batch_run':
+ if minion in self.active:
+ self.active.remove(minion)
+ self.done_minions.add(minion)
+ self.event.io_loop.spawn_callback(self.schedule_next)
+ except Exception as ex:
+ log.error("Exception occured while processing event: {}".format(ex))
def _get_next(self):
to_run = self.minions.difference(
@@ -146,54 +148,59 @@ class BatchAsync(object):
if timedout_minions:
self.schedule_next()
- if running:
+ if self.event and running:
self.find_job_returned = self.find_job_returned.difference(running)
self.event.io_loop.spawn_callback(self.find_job, running)
@tornado.gen.coroutine
def find_job(self, minions):
- not_done = minions.difference(self.done_minions).difference(self.timedout_minions)
-
- if not_done:
- jid = self.jid_gen()
- find_job_return_pattern = 'salt/job/{0}/ret/*'.format(jid)
- self.patterns.add((find_job_return_pattern, "find_job_return"))
- self.event.subscribe(find_job_return_pattern, match_type='glob')
-
- ret = yield self.local.run_job_async(
- not_done,
- 'saltutil.find_job',
- [self.batch_jid],
- 'list',
- gather_job_timeout=self.opts['gather_job_timeout'],
- jid=jid,
- **self.eauth)
- yield tornado.gen.sleep(self.opts['gather_job_timeout'])
- self.event.io_loop.spawn_callback(
- self.check_find_job,
- not_done,
- jid)
+ if self.event:
+ not_done = minions.difference(self.done_minions).difference(self.timedout_minions)
+ try:
+ if not_done:
+ jid = self.jid_gen()
+ find_job_return_pattern = 'salt/job/{0}/ret/*'.format(jid)
+ self.patterns.add((find_job_return_pattern, "find_job_return"))
+ self.event.subscribe(find_job_return_pattern, match_type='glob')
+ ret = yield self.local.run_job_async(
+ not_done,
+ 'saltutil.find_job',
+ [self.batch_jid],
+ 'list',
+ gather_job_timeout=self.opts['gather_job_timeout'],
+ jid=jid,
+ **self.eauth)
+ yield tornado.gen.sleep(self.opts['gather_job_timeout'])
+ if self.event:
+ self.event.io_loop.spawn_callback(
+ self.check_find_job,
+ not_done,
+ jid)
+ except Exception as ex:
+ log.error("Exception occured handling batch async: {}. Aborting execution.".format(ex))
+ self.close_safe()
@tornado.gen.coroutine
def start(self):
- self.__set_event_handler()
- ping_return = yield self.local.run_job_async(
- self.opts['tgt'],
- 'test.ping',
- [],
- self.opts.get(
- 'selected_target_option',
- self.opts.get('tgt_type', 'glob')
- ),
- gather_job_timeout=self.opts['gather_job_timeout'],
- jid=self.ping_jid,
- metadata=self.metadata,
- **self.eauth)
- self.targeted_minions = set(ping_return['minions'])
- #start batching even if not all minions respond to ping
- yield tornado.gen.sleep(self.batch_presence_ping_timeout or self.opts['gather_job_timeout'])
- self.event.io_loop.spawn_callback(self.start_batch)
-
+ if self.event:
+ self.__set_event_handler()
+ ping_return = yield self.local.run_job_async(
+ self.opts['tgt'],
+ 'test.ping',
+ [],
+ self.opts.get(
+ 'selected_target_option',
+ self.opts.get('tgt_type', 'glob')
+ ),
+ gather_job_timeout=self.opts['gather_job_timeout'],
+ jid=self.ping_jid,
+ metadata=self.metadata,
+ **self.eauth)
+ self.targeted_minions = set(ping_return['minions'])
+ #start batching even if not all minions respond to ping
+ yield tornado.gen.sleep(self.batch_presence_ping_timeout or self.opts['gather_job_timeout'])
+ if self.event:
+ self.event.io_loop.spawn_callback(self.start_batch)
@tornado.gen.coroutine
def start_batch(self):
@@ -206,7 +213,8 @@ class BatchAsync(object):
"metadata": self.metadata
}
ret = self.event.fire_event(data, "salt/batch/{0}/start".format(self.batch_jid))
- self.event.io_loop.spawn_callback(self.run_next)
+ if self.event:
+ self.event.io_loop.spawn_callback(self.run_next)
@tornado.gen.coroutine
def end_batch(self):
@@ -221,11 +229,21 @@ class BatchAsync(object):
"metadata": self.metadata
}
self.event.fire_event(data, "salt/batch/{0}/done".format(self.batch_jid))
- for (pattern, label) in self.patterns:
- if label in ["ping_return", "batch_run"]:
- self.event.unsubscribe(pattern, match_type='glob')
- del self
- gc.collect()
+
+ # release to the IOLoop to allow the event to be published
+ # before closing batch async execution
+ yield tornado.gen.sleep(1)
+ self.close_safe()
+
+ def close_safe(self):
+ for (pattern, label) in self.patterns:
+ self.event.unsubscribe(pattern, match_type='glob')
+ self.event.remove_event_handler(self.__event_handler)
+ self.event = None
+ self.local = None
+ self.ioloop = None
+ del self
+ gc.collect()
@tornado.gen.coroutine
def schedule_next(self):
@@ -233,7 +251,8 @@ class BatchAsync(object):
self.scheduled = True
# call later so that we maybe gather more returns
yield tornado.gen.sleep(self.batch_delay)
- self.event.io_loop.spawn_callback(self.run_next)
+ if self.event:
+ self.event.io_loop.spawn_callback(self.run_next)
@tornado.gen.coroutine
def run_next(self):
@@ -254,17 +273,20 @@ class BatchAsync(object):
metadata=self.metadata)
yield tornado.gen.sleep(self.opts['timeout'])
- self.event.io_loop.spawn_callback(self.find_job, set(next_batch))
+
+ # The batch can be done already at this point, which means no self.event
+ if self.event:
+ self.event.io_loop.spawn_callback(self.find_job, set(next_batch))
except Exception as ex:
- log.error("Error in scheduling next batch: %s", ex)
+ log.error("Error in scheduling next batch: %s. Aborting execution", ex)
self.active = self.active.difference(next_batch)
+ self.close_safe()
else:
yield self.end_batch()
gc.collect()
def __del__(self):
self.local = None
- self.event.remove_event_handler(self.__event_handler)
self.event = None
self.ioloop = None
gc.collect()
--
2.23.0
++++++ batch.py-avoid-exception-when-minion-does-not-respon.patch ++++++
From bbd2e622f7e165a6e16fd5edf5f4596764748208 Mon Sep 17 00:00:00 2001
From: Pablo Suárez Hernández <psuarezhernandez@suse.com>
Date: Wed, 5 Jun 2019 15:15:04 +0100
Subject: [PATCH] batch.py: avoid exception when minion does not respond
(bsc#1135507)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
We have several issues reporting that Salt throws an exception when a
minion does not respond. This change avoids the exception by adding
default data for the minion when it fails to respond. This patch is
based on the patch suggested by @roskens.
Issues #46876 #48509 #50238
bsc#1135507
Signed-off-by: José Guilherme Vanz <jguilhermevanz@suse.com>
---
salt/cli/batch.py | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/salt/cli/batch.py b/salt/cli/batch.py
index 67f03c8a45..10fc81a5f4 100644
--- a/salt/cli/batch.py
+++ b/salt/cli/batch.py
@@ -318,6 +318,11 @@ class Batch(object):
if self.opts.get('failhard') and data['retcode'] > 0:
failhard = True
+ # avoid an exception if the minion does not respond.
+ if data.get("failed") is True:
+ log.debug('Minion %s failed to respond: data=%s', minion, data)
+ data = {'ret': 'Minion did not return. [Failed]', 'retcode': salt.defaults.exitcodes.EX_GENERIC}
+
if self.opts.get('raw'):
ret[minion] = data
yield data
--
2.16.4
++++++ batch_async-avoid-using-fnmatch-to-match-event-217.patch ++++++
From bd20cd2655a1141fe9ea892e974e40988c3fb83c Mon Sep 17 00:00:00 2001
From: Silvio Moioli <smoioli@suse.de>
Date: Mon, 2 Mar 2020 11:23:59 +0100
Subject: [PATCH] batch_async: avoid using fnmatch to match event (#217)
---
salt/cli/batch_async.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/salt/cli/batch_async.py b/salt/cli/batch_async.py
index c4545e3ebc..da069b64bd 100644
--- a/salt/cli/batch_async.py
+++ b/salt/cli/batch_async.py
@@ -106,7 +106,7 @@ class BatchAsync(object):
return
mtag, data = self.event.unpack(raw, self.event.serial)
for (pattern, op) in self.patterns:
- if fnmatch.fnmatch(mtag, pattern):
+ if mtag.startswith(pattern[:-1]):
minion = data['id']
if op == 'ping_return':
self.minions.add(minion)
--
2.23.0
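The change is safe because every subscribed pattern here ends in a trailing '*' with no other glob characters, so a plain prefix comparison is equivalent to fnmatch while skipping its per-event pattern translation. A quick standalone check of the equivalence (event tags invented):

    import fnmatch

    patterns = ['salt/job/20200901/ret/*', 'salt/batch/20200901/*']
    mtag = 'salt/job/20200901/ret/minion1'

    for pattern in patterns:
        # General (slower) glob matching:
        slow = fnmatch.fnmatch(mtag, pattern)
        # Equivalent prefix check when the pattern is '<prefix>*':
        fast = mtag.startswith(pattern[:-1])
        assert slow == fast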
++++++ calculate-fqdns-in-parallel-to-avoid-blockings-bsc-1.patch ++++++
From 07f5a1d984b5a86c24620503f5e373ea0f11484a Mon Sep 17 00:00:00 2001
From: Pablo Suárez Hernández <psuarezhernandez@suse.com>
Date: Fri, 12 Apr 2019 16:47:03 +0100
Subject: [PATCH] Calculate FQDNs in parallel to avoid blockings
(bsc#1129079)
Fix pylint issue
---
salt/grains/core.py | 31 ++++++++++++++++++++++++++-----
1 file changed, 26 insertions(+), 5 deletions(-)
diff --git a/salt/grains/core.py b/salt/grains/core.py
index 309e4c9c4a..4600f055dd 100644
--- a/salt/grains/core.py
+++ b/salt/grains/core.py
@@ -20,12 +20,15 @@ import platform
import logging
import locale
import uuid
+import time
import zlib
from errno import EACCES, EPERM
import datetime
import warnings
import time
+from multiprocessing.dummy import Pool as ThreadPool
+
# pylint: disable=import-error
try:
import dateutil.tz
@@ -2275,13 +2278,10 @@ def fqdns():
grains = {}
fqdns = set()
- addresses = salt.utils.network.ip_addrs(include_loopback=False, interface_data=_get_interfaces())
- addresses.extend(salt.utils.network.ip_addrs6(include_loopback=False, interface_data=_get_interfaces()))
- err_message = 'Exception during resolving address: %s'
- for ip in addresses:
+ def _lookup_fqdn(ip):
try:
name, aliaslist, addresslist = socket.gethostbyaddr(ip)
- fqdns.update([socket.getfqdn(name)] + [als for als in aliaslist if salt.utils.network.is_fqdn(als)])
+ return [socket.getfqdn(name)] + [als for als in aliaslist if salt.utils.network.is_fqdn(als)]
except socket.herror as err:
if err.errno in (0, HOST_NOT_FOUND, NO_DATA):
# No FQDN for this IP address, so we don't need to know this all the time.
@@ -2291,6 +2291,27 @@ def fqdns():
except (socket.error, socket.gaierror, socket.timeout) as err:
log.error(err_message, ip, err)
+ start = time.time()
+
+ addresses = salt.utils.network.ip_addrs(include_loopback=False, interface_data=_get_interfaces())
+ addresses.extend(salt.utils.network.ip_addrs6(include_loopback=False, interface_data=_get_interfaces()))
+ err_message = 'Exception during resolving address: %s'
+
+ # Create a ThreadPool to process the underlying calls to 'socket.gethostbyaddr' in parallel.
+ # This avoid blocking the execution when the "fqdn" is not defined for certains IP addresses, which was causing
+ # that "socket.timeout" was reached multiple times secuencially, blocking execution for several seconds.
+ pool = ThreadPool(8)
+ results = pool.map(_lookup_fqdn, addresses)
+ pool.close()
+ pool.join()
+
+ for item in results:
+ if item:
+ fqdns.update(item)
+
+ elapsed = time.time() - start
+ log.debug('Elapsed time getting FQDNs: {} seconds'.format(elapsed))
+
return {"fqdns": sorted(list(fqdns))}
--
2.16.4
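The reason a pool helps: a reverse lookup on an address without a PTR record blocks for the full resolver timeout, and resolving the addresses one by one adds those timeouts up; eight worker threads let them overlap. A self-contained sketch of the same pattern (the address list is made up):

    import socket
    from multiprocessing.dummy import Pool as ThreadPool  # thread-based Pool

    def _lookup_fqdn(ip):
        try:
            name, aliaslist, _ = socket.gethostbyaddr(ip)
            return [socket.getfqdn(name)] + aliaslist
        except (socket.herror, socket.error, socket.gaierror, socket.timeout):
            return None  # no FQDN for this address

    addresses = ['192.0.2.1', '192.0.2.2', '198.51.100.7']  # example IPs
    pool = ThreadPool(8)
    results = pool.map(_lookup_fqdn, addresses)  # lookups run concurrently
    pool.close()
    pool.join()

    fqdns = set()
    for item in results:
        if item:
            fqdns.update(item)
    print(sorted(fqdns))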
++++++ changed-imports-to-vendored-tornado.patch ++++++
From 0cf1a655aa9353b22ae011e492a33aa52d780f83 Mon Sep 17 00:00:00 2001
From: Jochen Breuer <jbreuer@suse.de>
Date: Tue, 10 Mar 2020 14:02:17 +0100
Subject: [PATCH] Changed imports to vendored Tornado
---
salt/cli/batch_async.py | 26 ++++++++++++------------
salt/master.py | 2 +-
salt/transport/ipc.py | 4 ++--
tests/unit/cli/test_batch_async.py | 32 +++++++++++++++---------------
4 files changed, 32 insertions(+), 32 deletions(-)
diff --git a/salt/cli/batch_async.py b/salt/cli/batch_async.py
index b8f272ed67..08eeb34f1c 100644
--- a/salt/cli/batch_async.py
+++ b/salt/cli/batch_async.py
@@ -6,7 +6,7 @@ Execute a job on the targeted minions by using a moving window of fixed size `ba
# Import python libs
from __future__ import absolute_import, print_function, unicode_literals
import gc
-import tornado
+import salt.ext.tornado
# Import salt libs
import salt.client
@@ -50,7 +50,7 @@ class BatchAsync(object):
}
'''
def __init__(self, parent_opts, jid_gen, clear_load):
- ioloop = tornado.ioloop.IOLoop.current()
+ ioloop = salt.ext.tornado.ioloop.IOLoop.current()
self.local = salt.client.get_local_client(parent_opts['conf_file'], io_loop=ioloop)
if 'gather_job_timeout' in clear_load['kwargs']:
clear_load['gather_job_timeout'] = clear_load['kwargs'].pop('gather_job_timeout')
@@ -152,7 +152,7 @@ class BatchAsync(object):
self.find_job_returned = self.find_job_returned.difference(running)
self.event.io_loop.spawn_callback(self.find_job, running)
- @tornado.gen.coroutine
+ @salt.ext.tornado.gen.coroutine
def find_job(self, minions):
if self.event:
not_done = minions.difference(self.done_minions).difference(self.timedout_minions)
@@ -170,7 +170,7 @@ class BatchAsync(object):
gather_job_timeout=self.opts['gather_job_timeout'],
jid=jid,
**self.eauth)
- yield tornado.gen.sleep(self.opts['gather_job_timeout'])
+ yield salt.ext.tornado.gen.sleep(self.opts['gather_job_timeout'])
if self.event:
self.event.io_loop.spawn_callback(
self.check_find_job,
@@ -180,7 +180,7 @@ class BatchAsync(object):
log.error("Exception occured handling batch async: {}. Aborting execution.".format(ex))
self.close_safe()
- @tornado.gen.coroutine
+ @salt.ext.tornado.gen.coroutine
def start(self):
if self.event:
self.__set_event_handler()
@@ -198,11 +198,11 @@ class BatchAsync(object):
**self.eauth)
self.targeted_minions = set(ping_return['minions'])
#start batching even if not all minions respond to ping
- yield tornado.gen.sleep(self.batch_presence_ping_timeout or self.opts['gather_job_timeout'])
+ yield salt.ext.tornado.gen.sleep(self.batch_presence_ping_timeout or self.opts['gather_job_timeout'])
if self.event:
self.event.io_loop.spawn_callback(self.start_batch)
- @tornado.gen.coroutine
+ @salt.ext.tornado.gen.coroutine
def start_batch(self):
if not self.initialized:
self.batch_size = get_bnum(self.opts, self.minions, True)
@@ -216,7 +216,7 @@ class BatchAsync(object):
if self.event:
self.event.io_loop.spawn_callback(self.run_next)
- @tornado.gen.coroutine
+ @salt.ext.tornado.gen.coroutine
def end_batch(self):
left = self.minions.symmetric_difference(self.done_minions.union(self.timedout_minions))
if not left and not self.ended:
@@ -232,7 +232,7 @@ class BatchAsync(object):
# release to the IOLoop to allow the event to be published
# before closing batch async execution
- yield tornado.gen.sleep(1)
+ yield salt.ext.tornado.gen.sleep(1)
self.close_safe()
def close_safe(self):
@@ -245,16 +245,16 @@ class BatchAsync(object):
del self
gc.collect()
- @tornado.gen.coroutine
+ @salt.ext.tornado.gen.coroutine
def schedule_next(self):
if not self.scheduled:
self.scheduled = True
# call later so that we maybe gather more returns
- yield tornado.gen.sleep(self.batch_delay)
+ yield salt.ext.tornado.gen.sleep(self.batch_delay)
if self.event:
self.event.io_loop.spawn_callback(self.run_next)
- @tornado.gen.coroutine
+ @salt.ext.tornado.gen.coroutine
def run_next(self):
self.scheduled = False
next_batch = self._get_next()
@@ -272,7 +272,7 @@ class BatchAsync(object):
jid=self.batch_jid,
metadata=self.metadata)
- yield tornado.gen.sleep(self.opts['timeout'])
+ yield salt.ext.tornado.gen.sleep(self.opts['timeout'])
# The batch can be done already at this point, which means no self.event
if self.event:
diff --git a/salt/master.py b/salt/master.py
index 3abf7ae60b..3a9d12999d 100644
--- a/salt/master.py
+++ b/salt/master.py
@@ -2049,7 +2049,7 @@ class ClearFuncs(object):
functools.partial(self._prep_jid, clear_load, {}),
batch_load
)
- ioloop = tornado.ioloop.IOLoop.current()
+ ioloop = salt.ext.tornado.ioloop.IOLoop.current()
ioloop.add_callback(batch.start)
return {
diff --git a/salt/transport/ipc.py b/salt/transport/ipc.py
index d2b295a633..33ee3d4182 100644
--- a/salt/transport/ipc.py
+++ b/salt/transport/ipc.py
@@ -697,7 +697,7 @@ class IPCMessageSubscriber(IPCClient):
for callback in self.callbacks:
self.io_loop.spawn_callback(callback, raw)
- @tornado.gen.coroutine
+ @salt.ext.tornado.gen.coroutine
def read_async(self):
'''
Asynchronously read messages and invoke a callback when they are ready.
@@ -712,7 +712,7 @@ class IPCMessageSubscriber(IPCClient):
yield salt.ext.tornado.gen.sleep(1)
except Exception as exc: # pylint: disable=broad-except
log.error('Exception occurred while Subscriber connecting: %s', exc)
- yield tornado.gen.sleep(1)
+ yield salt.ext.tornado.gen.sleep(1)
yield self._read(None, self.__run_callbacks)
def close(self):
diff --git a/tests/unit/cli/test_batch_async.py b/tests/unit/cli/test_batch_async.py
index e1ce60859b..635dc689a8 100644
--- a/tests/unit/cli/test_batch_async.py
+++ b/tests/unit/cli/test_batch_async.py
@@ -5,8 +5,8 @@ from __future__ import absolute_import
# Import Salt Libs
from salt.cli.batch_async import BatchAsync
-import tornado
-from tornado.testing import AsyncTestCase
+import salt.ext.tornado
+from salt.ext.tornado.testing import AsyncTestCase
from tests.support.unit import skipIf, TestCase
from tests.support.mock import MagicMock, patch, NO_MOCK, NO_MOCK_REASON
@@ -59,10 +59,10 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase):
self.batch.start_batch()
self.assertEqual(self.batch.batch_size, 2)
- @tornado.testing.gen_test
+ @salt.ext.tornado.testing.gen_test
def test_batch_start_on_batch_presence_ping_timeout(self):
self.batch.event = MagicMock()
- future = tornado.gen.Future()
+ future = salt.ext.tornado.gen.Future()
future.set_result({'minions': ['foo', 'bar']})
self.batch.local.run_job_async.return_value = future
ret = self.batch.start()
@@ -78,10 +78,10 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase):
# assert targeted_minions == all minions matched by tgt
self.assertEqual(self.batch.targeted_minions, set(['foo', 'bar']))
- @tornado.testing.gen_test
+ @salt.ext.tornado.testing.gen_test
def test_batch_start_on_gather_job_timeout(self):
self.batch.event = MagicMock()
- future = tornado.gen.Future()
+ future = salt.ext.tornado.gen.Future()
future.set_result({'minions': ['foo', 'bar']})
self.batch.local.run_job_async.return_value = future
self.batch.batch_presence_ping_timeout = None
@@ -109,7 +109,7 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase):
)
)
- @tornado.testing.gen_test
+ @salt.ext.tornado.testing.gen_test
def test_start_batch_calls_next(self):
self.batch.run_next = MagicMock(return_value=MagicMock())
self.batch.event = MagicMock()
@@ -165,14 +165,14 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase):
self.assertEqual(
len(event.remove_event_handler.mock_calls), 1)
- @tornado.testing.gen_test
+ @salt.ext.tornado.testing.gen_test
def test_batch_next(self):
self.batch.event = MagicMock()
self.batch.opts['fun'] = 'my.fun'
self.batch.opts['arg'] = []
self.batch._get_next = MagicMock(return_value={'foo', 'bar'})
self.batch.batch_size = 2
- future = tornado.gen.Future()
+ future = salt.ext.tornado.gen.Future()
future.set_result({'minions': ['foo', 'bar']})
self.batch.local.run_job_async.return_value = future
self.batch.run_next()
@@ -284,38 +284,38 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase):
self.batch._BatchAsync__event_handler(MagicMock())
self.assertEqual(self.batch.find_job_returned, {'foo'})
- @tornado.testing.gen_test
+ @salt.ext.tornado.testing.gen_test
def test_batch_run_next_end_batch_when_no_next(self):
self.batch.end_batch = MagicMock()
self.batch._get_next = MagicMock(return_value={})
self.batch.run_next()
self.assertEqual(len(self.batch.end_batch.mock_calls), 1)
- @tornado.testing.gen_test
+ @salt.ext.tornado.testing.gen_test
def test_batch_find_job(self):
self.batch.event = MagicMock()
- future = tornado.gen.Future()
+ future = salt.ext.tornado.gen.Future()
future.set_result({})
self.batch.local.run_job_async.return_value = future
self.batch.minions = set(['foo', 'bar'])
self.batch.jid_gen = MagicMock(return_value="1234")
- tornado.gen.sleep = MagicMock(return_value=future)
+ salt.ext.tornado.gen.sleep = MagicMock(return_value=future)
self.batch.find_job({'foo', 'bar'})
self.assertEqual(
self.batch.event.io_loop.spawn_callback.call_args[0],
(self.batch.check_find_job, {'foo', 'bar'}, "1234")
)
- @tornado.testing.gen_test
+ @salt.ext.tornado.testing.gen_test
def test_batch_find_job_with_done_minions(self):
self.batch.done_minions = {'bar'}
self.batch.event = MagicMock()
- future = tornado.gen.Future()
+ future = salt.ext.tornado.gen.Future()
future.set_result({})
self.batch.local.run_job_async.return_value = future
self.batch.minions = set(['foo', 'bar'])
self.batch.jid_gen = MagicMock(return_value="1234")
- tornado.gen.sleep = MagicMock(return_value=future)
+ salt.ext.tornado.gen.sleep = MagicMock(return_value=future)
self.batch.find_job({'foo', 'bar'})
self.assertEqual(
self.batch.event.io_loop.spawn_callback.call_args[0],
--
2.23.0
++++++ debian-info_installed-compatibility-50453.patch ++++++
++++ 656 lines (skipped)
++++++ decide-if-the-source-should-be-actually-skipped.patch ++++++
From 615a8f8dfa8ef12eeb4c387e48309cc466b8597d Mon Sep 17 00:00:00 2001
From: Bo Maryniuk <bo@suse.de>
Date: Tue, 4 Dec 2018 16:39:08 +0100
Subject: [PATCH] Decide if the source should be actually skipped
---
salt/modules/aptpkg.py | 23 ++++++++++++++++++++++-
1 file changed, 22 insertions(+), 1 deletion(-)
diff --git a/salt/modules/aptpkg.py b/salt/modules/aptpkg.py
index 4ec9158476..3b0d8423db 100644
--- a/salt/modules/aptpkg.py
+++ b/salt/modules/aptpkg.py
@@ -1620,6 +1620,27 @@ def list_repo_pkgs(*args, **kwargs): # pylint: disable=unused-import
return ret
+def _skip_source(source):
+ '''
+ Decide to skip source or not.
+
+ :param source:
+ :return:
+ '''
+ if source.invalid:
+ if source.uri and source.type and source.type in ("deb", "deb-src", "rpm", "rpm-src"):
+ pieces = source.mysplit(source.line)
+ if pieces[1].strip()[0] == "[":
+ options = pieces.pop(1).strip("[]").split()
+ if len(options) > 0:
+ log.debug("Source %s will be included although is marked invalid", source.uri)
+ return False
+ return True
+ else:
+ return True
+ return False
+
+
def list_repos():
'''
Lists all repos in the sources.list (and sources.lists.d) files
@@ -1635,7 +1656,7 @@ def list_repos():
repos = {}
sources = sourceslist.SourcesList()
for source in sources.list:
- if source.invalid:
+ if _skip_source(source):
continue
repo = {}
repo['file'] = source.file
--
2.16.4
++++++ do-not-break-repo-files-with-multiple-line-values-on.patch ++++++
From f81a5b92d691c1d511a814f9344104dd37466bc3 Mon Sep 17 00:00:00 2001
From: Pablo Suárez Hernández <psuarezhernandez@suse.com>
Date: Wed, 29 May 2019 11:03:16 +0100
Subject: [PATCH] Do not break repo files with multiple line values on
yumpkg (bsc#1135360)
---
tests/integration/modules/test_pkg.py | 48 +++++++++++++++++++++++++++++++++++
1 file changed, 48 insertions(+)
diff --git a/tests/integration/modules/test_pkg.py b/tests/integration/modules/test_pkg.py
index e8374db2c0..61748f9477 100644
--- a/tests/integration/modules/test_pkg.py
+++ b/tests/integration/modules/test_pkg.py
@@ -182,6 +182,54 @@ class PkgModuleTest(ModuleCase, SaltReturnAssertsMixin):
if repo is not None:
self.run_function('pkg.del_repo', [repo])
+ def test_mod_del_repo_multiline_values(self):
+ '''
+ test modifying and deleting a software repository defined with multiline values
+ '''
+ os_grain = self.run_function('grains.item', ['os'])['os']
+ repo = None
+ try:
+ if os_grain in ['CentOS', 'RedHat', 'SUSE']:
+ my_baseurl = 'http://my.fake.repo/foo/bar/\n http://my.fake.repo.alt/foo/bar/'
+ expected_get_repo_baseurl = 'http://my.fake.repo/foo/bar/\nhttp://my.fake.repo.alt/foo/bar/'
+ major_release = int(
+ self.run_function(
+ 'grains.item',
+ ['osmajorrelease']
+ )['osmajorrelease']
+ )
+ repo = 'fakerepo'
+ name = 'Fake repo for RHEL/CentOS/SUSE'
+ baseurl = my_baseurl
+ gpgkey = 'https://my.fake.repo/foo/bar/MY-GPG-KEY.pub'
+ failovermethod = 'priority'
+ gpgcheck = 1
+ enabled = 1
+ ret = self.run_function(
+ 'pkg.mod_repo',
+ [repo],
+ name=name,
+ baseurl=baseurl,
+ gpgkey=gpgkey,
+ gpgcheck=gpgcheck,
+ enabled=enabled,
+ failovermethod=failovermethod,
+ )
+ # return data from pkg.mod_repo contains the file modified at
+ # the top level, so use next(iter(ret)) to get that key
+ self.assertNotEqual(ret, {})
+ repo_info = ret[next(iter(ret))]
+ self.assertIn(repo, repo_info)
+ self.assertEqual(repo_info[repo]['baseurl'], my_baseurl)
+ ret = self.run_function('pkg.get_repo', [repo])
+ self.assertEqual(ret['baseurl'], expected_get_repo_baseurl)
+ self.run_function('pkg.mod_repo', [repo])
+ ret = self.run_function('pkg.get_repo', [repo])
+ self.assertEqual(ret['baseurl'], expected_get_repo_baseurl)
+ finally:
+ if repo is not None:
+ self.run_function('pkg.del_repo', [repo])
+
@requires_salt_modules('pkg.owner')
def test_owner(self):
'''
--
2.16.4
++++++ do-not-crash-when-there-are-ipv6-established-connect.patch ++++++
From bfee3a7c47786bb860663de97fca26725101f1d0 Mon Sep 17 00:00:00 2001
From: Pablo Suárez Hernández <psuarezhernandez@suse.com>
Date: Tue, 7 May 2019 15:33:51 +0100
Subject: [PATCH] Do not crash when there are IPv6 established
connections (bsc#1130784)
Add unit test for '_netlink_tool_remote_on'
---
salt/utils/network.py | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/salt/utils/network.py b/salt/utils/network.py
index 2ae2e213b7..307cab885f 100644
--- a/salt/utils/network.py
+++ b/salt/utils/network.py
@@ -1442,8 +1442,13 @@ def _netlink_tool_remote_on(port, which_end):
elif 'ESTAB' not in line:
continue
chunks = line.split()
+ local_host, local_port = chunks[3].rsplit(':', 1)
remote_host, remote_port = chunks[4].rsplit(':', 1)
+ if which_end == 'remote_port' and int(remote_port) != port:
+ continue
+ if which_end == 'local_port' and int(local_port) != port:
+ continue
remotes.add(remote_host.strip("[]"))
if valid is False:
--
2.23.0
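Note the use of rsplit(':', 1): IPv6 addresses in ss output contain colons themselves (e.g. [::1]:4505), so only the last colon safely separates host from port, and the added port filter skips connections on the wrong end. A standalone sketch over two invented ss lines:

    lines = [
        'ESTAB 0 0 127.0.0.1:4505 127.0.0.1:54832',
        'ESTAB 0 0 [::1]:4505 [2001:db8::2]:54833',
    ]

    port, which_end = 4505, 'local_port'
    remotes = set()
    for line in lines:
        chunks = line.split()
        # rsplit on the last ':' so IPv6 hosts keep their internal colons
        local_host, local_port = chunks[3].rsplit(':', 1)
        remote_host, remote_port = chunks[4].rsplit(':', 1)
        if which_end == 'local_port' and int(local_port) != port:
            continue
        if which_end == 'remote_port' and int(remote_port) != port:
            continue
        remotes.add(remote_host.strip('[]'))

    print(remotes)  # {'127.0.0.1', '2001:db8::2'}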
++++++ do-not-load-pip-state-if-there-is-no-3rd-party-depen.patch ++++++
From b1c96bdaec9723fd76a6dd5d72cf7fbfd681566f Mon Sep 17 00:00:00 2001
From: Bo Maryniuk <bo@suse.de>
Date: Fri, 21 Sep 2018 17:31:39 +0200
Subject: [PATCH] Do not load pip state if there is no 3rd party
dependencies
Safe import 3rd party dependency
---
salt/modules/pip.py | 12 ++++++++++--
1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/salt/modules/pip.py b/salt/modules/pip.py
index ffdb73aefa..ff0836c35f 100644
--- a/salt/modules/pip.py
+++ b/salt/modules/pip.py
@@ -82,7 +82,10 @@ from __future__ import absolute_import, print_function, unicode_literals
# Import python libs
import logging
import os
-import pkg_resources
+try:
+ import pkg_resources
+except ImportError:
+ pkg_resources = None
import re
import shutil
import sys
@@ -119,7 +122,12 @@ def __virtual__():
entire filesystem. If it's not installed in a conventional location, the
user is required to provide the location of pip each time it is used.
'''
- return 'pip'
+ if pkg_resources is None:
+ ret = False, 'Package dependency "pkg_resource" is missing'
+ else:
+ ret = 'pip'
+
+ return ret
def _clear_context(bin_env=None):
--
2.16.4
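This is the usual Salt loader pattern for optional third-party imports: tolerate the ImportError at import time, then let __virtual__() return a (False, reason) tuple so the module is skipped with a readable message rather than crashing the loader. A minimal sketch of the shape:

    try:
        import pkg_resources
    except ImportError:
        pkg_resources = None  # optional dependency missing; decide later

    def __virtual__():
        # A (False, reason) tuple tells the Salt loader to skip this
        # module with a readable message instead of a traceback.
        if pkg_resources is None:
            return False, 'Package dependency "pkg_resources" is missing'
        return 'pip'

    print(__virtual__())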
++++++ do-not-make-ansiblegate-to-crash-on-python3-minions.patch ++++++
From 235cca81be2f64ed3feb48ed42bfa3f9196bff39 Mon Sep 17 00:00:00 2001
From: Pablo Suárez Hernández <psuarezhernandez@suse.com>
Date: Fri, 28 Jun 2019 15:17:56 +0100
Subject: [PATCH] Do not make ansiblegate to crash on Python3 minions
Fix pylint issues
Move MockTimedProc implementation to tests.support.mock
Add unit test for ansible caller
---
salt/modules/ansiblegate.py | 14 +++++++++---
tests/support/mock.py | 31 +++++++++++++++++++++++++
tests/unit/modules/test_ansiblegate.py | 41 ++++++++++++++++++++++++++++++++++
tests/unit/modules/test_cmdmod.py | 35 ++---------------------------
4 files changed, 85 insertions(+), 36 deletions(-)
diff --git a/salt/modules/ansiblegate.py b/salt/modules/ansiblegate.py
index 6b903c2b94..8e28fcafa3 100644
--- a/salt/modules/ansiblegate.py
+++ b/salt/modules/ansiblegate.py
@@ -147,6 +147,10 @@ class AnsibleModuleCaller(object):
:param kwargs: keywords to the module
:return:
'''
+ if six.PY3:
+ python_exec = 'python3'
+ else:
+ python_exec = 'python'
module = self._resolver.load_module(module)
if not hasattr(module, 'main'):
@@ -162,9 +166,13 @@ class AnsibleModuleCaller(object):
["echo", "{0}".format(js_args)],
stdout=subprocess.PIPE, timeout=self.timeout)
proc_out.run()
+ if six.PY3:
+ proc_out_stdout = proc_out.stdout.decode()
+ else:
+ proc_out_stdout = proc_out.stdout
proc_exc = salt.utils.timed_subprocess.TimedProc(
- ['python', module.__file__],
- stdin=proc_out.stdout, stdout=subprocess.PIPE, timeout=self.timeout)
+ [python_exec, module.__file__],
+ stdin=proc_out_stdout, stdout=subprocess.PIPE, timeout=self.timeout)
proc_exc.run()
try:
@@ -263,7 +271,7 @@ def help(module=None, *args):
description = doc.get('description') or ''
del doc['description']
ret['Description'] = description
- ret['Available sections on module "{}"'.format(module.__name__.replace('ansible.modules.', ''))] = doc.keys()
+ ret['Available sections on module "{}"'.format(module.__name__.replace('ansible.modules.', ''))] = [i for i in doc.keys()]
else:
for arg in args:
info = doc.get(arg)
diff --git a/tests/support/mock.py b/tests/support/mock.py
index 805a60377c..67ecb4838a 100644
--- a/tests/support/mock.py
+++ b/tests/support/mock.py
@@ -461,6 +461,37 @@ class MockOpen(object):
ret.extend(fh_.writelines_calls)
return ret
+class MockTimedProc(object):
+ '''
+ Class used as a stand-in for salt.utils.timed_subprocess.TimedProc
+ '''
+ class _Process(object):
+ '''
+ Used to provide a dummy "process" attribute
+ '''
+ def __init__(self, returncode=0, pid=12345):
+ self.returncode = returncode
+ self.pid = pid
+
+ def __init__(self, stdout=None, stderr=None, returncode=0, pid=12345):
+ if stdout is not None and not isinstance(stdout, bytes):
+ raise TypeError('Must pass stdout to MockTimedProc as bytes')
+ if stderr is not None and not isinstance(stderr, bytes):
+ raise TypeError('Must pass stderr to MockTimedProc as bytes')
+ self._stdout = stdout
+ self._stderr = stderr
+ self.process = self._Process(returncode=returncode, pid=pid)
+
+ def run(self):
+ pass
+
+ @property
+ def stdout(self):
+ return self._stdout
+
+ @property
+ def stderr(self):
+ return self._stderr
# reimplement mock_open to support multiple filehandles
mock_open = MockOpen
diff --git a/tests/unit/modules/test_ansiblegate.py b/tests/unit/modules/test_ansiblegate.py
index 5613a0e79b..b7b43efda4 100644
--- a/tests/unit/modules/test_ansiblegate.py
+++ b/tests/unit/modules/test_ansiblegate.py
@@ -29,11 +29,13 @@ from tests.support.unit import TestCase, skipIf
from tests.support.mock import (
patch,
MagicMock,
+ MockTimedProc,
)
import salt.modules.ansiblegate as ansible
import salt.utils.platform
from salt.exceptions import LoaderError
+from salt.ext import six
@skipIf(NO_PYTEST, False)
@@ -134,3 +136,42 @@ description:
'''
with patch('salt.modules.ansiblegate.ansible', None):
assert ansible.__virtual__() == 'ansible'
+
+ def test_ansible_module_call(self):
+ '''
+ Test Ansible module call from ansible gate module
+
+ :return:
+ '''
+
+ class Module(object):
+ '''
+ An ansible module mock.
+ '''
+ __name__ = 'one.two.three'
+ __file__ = 'foofile'
+
+ def main():
+ pass
+
+ ANSIBLE_MODULE_ARGS = '{"ANSIBLE_MODULE_ARGS": ["arg_1", {"kwarg1": "foobar"}]}'
+
+ proc = MagicMock(side_effect=[
+ MockTimedProc(
+ stdout=ANSIBLE_MODULE_ARGS.encode(),
+ stderr=None),
+ MockTimedProc(stdout='{"completed": true}'.encode(), stderr=None)
+ ])
+
+ with patch.object(ansible, '_resolver', self.resolver), \
+ patch.object(ansible._resolver, 'load_module', MagicMock(return_value=Module())):
+ _ansible_module_caller = ansible.AnsibleModuleCaller(ansible._resolver)
+ with patch('salt.utils.timed_subprocess.TimedProc', proc):
+ ret = _ansible_module_caller.call("one.two.three", "arg_1", kwarg1="foobar")
+ if six.PY3:
+ proc.assert_any_call(['echo', '{"ANSIBLE_MODULE_ARGS": {"kwarg1": "foobar", "_raw_params": "arg_1"}}'], stdout=-1, timeout=1200)
+ proc.assert_any_call(['python3', 'foofile'], stdin=ANSIBLE_MODULE_ARGS, stdout=-1, timeout=1200)
+ else:
+ proc.assert_any_call(['echo', '{"ANSIBLE_MODULE_ARGS": {"_raw_params": "arg_1", "kwarg1": "foobar"}}'], stdout=-1, timeout=1200)
+ proc.assert_any_call(['python', 'foofile'], stdin=ANSIBLE_MODULE_ARGS, stdout=-1, timeout=1200)
+ assert ret == {"completed": True, "timeout": 1200}
diff --git a/tests/unit/modules/test_cmdmod.py b/tests/unit/modules/test_cmdmod.py
index 8170a56b4e..f8fba59294 100644
--- a/tests/unit/modules/test_cmdmod.py
+++ b/tests/unit/modules/test_cmdmod.py
@@ -26,6 +26,7 @@ from tests.support.helpers import TstSuiteLoggingHandler
from tests.support.mock import (
mock_open,
Mock,
+ MockTimedProc,
MagicMock,
patch
)
@@ -36,39 +37,7 @@ MOCK_SHELL_FILE = '# List of acceptable shells\n' \
'/bin/bash\n'
-class MockTimedProc(object):
- '''
- Class used as a stand-in for salt.utils.timed_subprocess.TimedProc
- '''
- class _Process(object):
- '''
- Used to provide a dummy "process" attribute
- '''
- def __init__(self, returncode=0, pid=12345):
- self.returncode = returncode
- self.pid = pid
-
- def __init__(self, stdout=None, stderr=None, returncode=0, pid=12345):
- if stdout is not None and not isinstance(stdout, bytes):
- raise TypeError('Must pass stdout to MockTimedProc as bytes')
- if stderr is not None and not isinstance(stderr, bytes):
- raise TypeError('Must pass stderr to MockTimedProc as bytes')
- self._stdout = stdout
- self._stderr = stderr
- self.process = self._Process(returncode=returncode, pid=pid)
-
- def run(self):
- pass
-
- @property
- def stdout(self):
- return self._stdout
-
- @property
- def stderr(self):
- return self._stderr
-
-
+@skipIf(NO_MOCK, NO_MOCK_REASON)
class CMDMODTestCase(TestCase, LoaderModuleMockMixin):
'''
Unit tests for the salt.modules.cmdmod module
--
2.16.4
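The portability points above are twofold: pick the right interpreter binary for the child process, and decode the bytes that Python 3 pipes return before reusing them as a string. A tiny sketch of both, assuming the six compatibility shim is available (Salt vendors it as salt.ext.six):

    import subprocess
    import six  # compatibility shim; Salt vendors it as salt.ext.six

    python_exec = 'python3' if six.PY3 else 'python'

    proc = subprocess.Popen([python_exec, '-c', 'print("ok")'],
                            stdout=subprocess.PIPE)
    out, _ = proc.communicate()
    if six.PY3:
        out = out.decode()  # pipes yield bytes on Python 3
    print(out.strip())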
++++++ do-not-report-patches-as-installed-when-not-all-the-.patch ++++++
From 7e9adda8dfd53050756d0ac0cf64570b76ce7365 Mon Sep 17 00:00:00 2001
From: Pablo Suárez Hernández <psuarezhernandez@suse.com>
Date: Wed, 13 Mar 2019 16:14:07 +0000
Subject: [PATCH] Do not report patches as installed when not all the
related packages are installed (bsc#1128061)
Co-authored-by: Mihai Dinca <mdinca@suse.de>
---
salt/modules/yumpkg.py | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/salt/modules/yumpkg.py b/salt/modules/yumpkg.py
index b1257d0de0..3ddf989511 100644
--- a/salt/modules/yumpkg.py
+++ b/salt/modules/yumpkg.py
@@ -3220,7 +3220,11 @@ def _get_patches(installed_only=False):
for line in salt.utils.itertools.split(ret, os.linesep):
inst, advisory_id, sev, pkg = re.match(r'([i|\s]) ([^\s]+) +([^\s]+) +([^\s]+)',
line).groups()
if advisory_id not in patches:
patches[advisory_id] = {
'installed': True if inst == 'i' else False,
'summary': [pkg]
}
+ else:
+ patches[advisory_id]['summary'].append(pkg)
+ if inst != 'i':
+ patches[advisory_id]['installed'] = False
--
2.16.4
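The intended behaviour: one advisory covers several packages, and it should only be reported as installed when every one of those packages is installed. A standalone sketch of that aggregation over invented 'yum updateinfo list' lines:

    import re

    output = [
        'i SUSE-2019-123 important vim-8.0',
        '  SUSE-2019-123 important vim-data-8.0',  # not installed
        'i SUSE-2019-456 moderate  curl-7.60',
    ]

    patches = {}
    for line in output:
        inst, advisory_id, sev, pkg = re.match(
            r'([i|\s]) ([^\s]+) +([^\s]+) +([^\s]+)', line).groups()
        if advisory_id not in patches:
            patches[advisory_id] = {'installed': inst == 'i', 'summary': [pkg]}
        else:
            patches[advisory_id]['summary'].append(pkg)
            if inst != 'i':
                # one missing package makes the whole advisory not installed
                patches[advisory_id]['installed'] = False

    print(patches['SUSE-2019-123']['installed'])  # False
    print(patches['SUSE-2019-456']['installed'])  # True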
++++++ don-t-call-zypper-with-more-than-one-no-refresh.patch ++++++
From c1f5e6332bf025394b81868bf1edc6ae44944a7c Mon Sep 17 00:00:00 2001
From: Cédric Bosdonnat <cbosdonnat@suse.com>
Date: Tue, 29 Jan 2019 09:44:03 +0100
Subject: [PATCH] Don't call zypper with more than one --no-refresh
Newer zypper versions are picky and error out when --no-refresh is
passed twice. Make sure we never hit this.
---
salt/modules/zypperpkg.py | 2 +-
tests/unit/modules/test_zypperpkg.py | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/salt/modules/zypperpkg.py b/salt/modules/zypperpkg.py
index 04a6a6872d..37428cf67c 100644
--- a/salt/modules/zypperpkg.py
+++ b/salt/modules/zypperpkg.py
@@ -282,7 +282,7 @@ class _Zypper(object):
self.__called = True
if self.__xml:
self.__cmd.append('--xmlout')
- if not self.__refresh:
+ if not self.__refresh and '--no-refresh' not in args:
self.__cmd.append('--no-refresh')
self.__cmd.extend(args)
diff --git a/tests/unit/modules/test_zypperpkg.py b/tests/unit/modules/test_zypperpkg.py
index b3162f10cd..956902eab3 100644
--- a/tests/unit/modules/test_zypperpkg.py
+++ b/tests/unit/modules/test_zypperpkg.py
@@ -135,7 +135,7 @@ class ZypperTestCase(TestCase, LoaderModuleMockMixin):
self.assertEqual(zypper.__zypper__.call('foo'), stdout_xml_snippet)
self.assertEqual(len(sniffer.calls), 1)
- zypper.__zypper__.call('bar')
+ zypper.__zypper__.call('--no-refresh', 'bar')
self.assertEqual(len(sniffer.calls), 2)
self.assertEqual(sniffer.calls[0]['args'][0], ['zypper', '--non-interactive', '--no-refresh', 'foo'])
self.assertEqual(sniffer.calls[1]['args'][0], ['zypper', '--non-interactive', '--no-refresh', 'bar'])
--
2.16.4
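The guard generalizes to any CLI wrapper that both injects a default flag and accepts caller-supplied ones: check the argument list before appending. A toy sketch (build_cmd is invented, not the _Zypper class itself):

    def build_cmd(refresh, *args):
        # Only add --no-refresh when the caller has not already passed it;
        # recent zypper errors out if the flag appears twice.
        cmd = ['zypper', '--non-interactive']
        if not refresh and '--no-refresh' not in args:
            cmd.append('--no-refresh')
        cmd.extend(args)
        return cmd

    print(build_cmd(False, 'refresh'))                  # adds the flag
    print(build_cmd(False, '--no-refresh', 'refresh'))  # keeps a single copy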
++++++ early-feature-support-config.patch ++++++
++++ 1996 lines (skipped)
++++++ enable-passing-a-unix_socket-for-mysql-returners-bsc.patch ++++++
From cc3bd759bc0e4cc3414ccc5a2928c593fa2eee04 Mon Sep 17 00:00:00 2001
From: Maximilian Meister <mmeister@suse.de>
Date: Thu, 3 May 2018 15:52:23 +0200
Subject: [PATCH] enable passing a unix_socket for mysql returners
(bsc#1091371)
Quick fix for:
https://bugzilla.suse.com/show_bug.cgi?id=1091371
The upstream patch will go through some bigger refactoring of
the MySQL drivers to make them cleaner.
This patch should only be temporary and can be dropped again once
the refactoring is done upstream.
Signed-off-by: Maximilian Meister <mmeister@suse.de>
---
salt/returners/mysql.py | 11 ++++++++---
1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/salt/returners/mysql.py b/salt/returners/mysql.py
index 69599ec36a..ff9d380843 100644
--- a/salt/returners/mysql.py
+++ b/salt/returners/mysql.py
@@ -18,6 +18,7 @@ config. These are the defaults:
mysql.pass: 'salt'
mysql.db: 'salt'
mysql.port: 3306
+ mysql.unix_socket: '/tmp/mysql.sock'
SSL is optional. The defaults are set to None. If you do not want to use SSL,
either exclude these options or set them to None.
@@ -43,6 +44,7 @@ optional. The following ssl options are simply for illustration purposes:
alternative.mysql.ssl_ca: '/etc/pki/mysql/certs/localhost.pem'
alternative.mysql.ssl_cert: '/etc/pki/mysql/certs/localhost.crt'
alternative.mysql.ssl_key: '/etc/pki/mysql/certs/localhost.key'
+ alternative.mysql.unix_socket: '/tmp/mysql.sock'
Should you wish the returner data to be cleaned out every so often, set
`keep_jobs` to the number of hours for the jobs to live in the tables.
@@ -198,7 +200,8 @@ def _get_options(ret=None):
'port': 3306,
'ssl_ca': None,
'ssl_cert': None,
- 'ssl_key': None}
+ 'ssl_key': None,
+ 'unix_socket': '/tmp/mysql.sock'}
attrs = {'host': 'host',
'user': 'user',
@@ -207,7 +210,8 @@ def _get_options(ret=None):
'port': 'port',
'ssl_ca': 'ssl_ca',
'ssl_cert': 'ssl_cert',
- 'ssl_key': 'ssl_key'}
+ 'ssl_key': 'ssl_key',
+ 'unix_socket': 'unix_socket'}
_options = salt.returners.get_returner_options(__virtualname__,
ret,
@@ -261,7 +265,8 @@ def _get_serv(ret=None, commit=False):
passwd=_options.get('pass'),
db=_options.get('db'),
port=_options.get('port'),
- ssl=ssl_options)
+ ssl=ssl_options,
+ unix_socket=_options.get('unix_socket'))
try:
__context__['mysql_returner_conn'] = conn
--
2.16.4
++++++ fall-back-to-pymysql.patch ++++++
From f0098b4b9e5abaaca7bbc6c17f5a60bb2129dda5 Mon Sep 17 00:00:00 2001
From: Maximilian Meister <mmeister@suse.de>
Date: Thu, 5 Apr 2018 13:23:23 +0200
Subject: [PATCH] fall back to PyMySQL
The same is already done in the execution modules (see #26803).
Signed-off-by: Maximilian Meister <mmeister@suse.de>
---
salt/modules/mysql.py | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/salt/modules/mysql.py b/salt/modules/mysql.py
index 87e2361e28..e785e5219c 100644
--- a/salt/modules/mysql.py
+++ b/salt/modules/mysql.py
@@ -58,7 +58,7 @@ try:
import MySQLdb.cursors
import MySQLdb.converters
from MySQLdb.constants import FIELD_TYPE, FLAG
- from MySQLdb import OperationalError
+ from MySQLdb.connections import OperationalError
except ImportError:
try:
# MySQLdb import failed, try to import PyMySQL
@@ -68,7 +68,7 @@ except ImportError:
import MySQLdb.cursors
import MySQLdb.converters
from MySQLdb.constants import FIELD_TYPE, FLAG
- from MySQLdb import OperationalError
+ from MySQLdb.err import OperationalError
except ImportError:
MySQLdb = None
--
2.16.4
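The fallback works because PyMySQL can register itself under the MySQLdb name; the diff's subtlety is only that OperationalError then lives in a different submodule per driver (MySQLdb.connections for the C driver, MySQLdb.err under PyMySQL). The usual fallback shape, sketched without the extra submodule imports the real module pulls in:

    try:
        import MySQLdb  # the native MySQL-python / mysqlclient driver
    except ImportError:
        try:
            # PyMySQL is a pure-Python drop-in; register it under the
            # MySQLdb name so the rest of the code is driver-agnostic.
            import pymysql
            pymysql.install_as_MySQLdb()
            import MySQLdb
        except ImportError:
            MySQLdb = None  # no MySQL driver available at all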
++++++ fix-__mount_device-wrapper-253.patch ++++++
From 5a8ce2c85637db8bbd2de11413f638fbed6dcd3c Mon Sep 17 00:00:00 2001
From: Alberto Planas <aplanas@suse.com>
Date: Wed, 29 Jul 2020 16:12:21 +0200
Subject: [PATCH] Fix __mount_device wrapper (#253)
A recent change in Salt now does the right thing and calls the
different states with separate args and kwargs. This change triggered a
hidden bug in the __mount_device decorator, which expected those parameters
to be in kwargs, as happens during the test.
This patch changes the way the wrapper inside the decorator searches
for the name and device parameters: it first looks in kwargs and then
falls back to args if possible. A new test is introduced to exercise both cases.
Fixes #58012
(cherry picked from commit 2089645e2478751dc795127cfd14d0385c2e0899)
---
changelog/58012.fixed | 1 +
salt/states/btrfs.py | 6 +++---
tests/unit/states/test_btrfs.py | 27 +++++++++++++++++++++++++++
3 files changed, 31 insertions(+), 3 deletions(-)
create mode 100644 changelog/58012.fixed
diff --git a/changelog/58012.fixed b/changelog/58012.fixed
new file mode 100644
index 0000000000..13a1ef747d
--- /dev/null
+++ b/changelog/58012.fixed
@@ -0,0 +1 @@
+Fix btrfs state decorator, that produces exceptions when creating subvolumes.
\ No newline at end of file
diff --git a/salt/states/btrfs.py b/salt/states/btrfs.py
index af78c8ae00..d0d6095c46 100644
--- a/salt/states/btrfs.py
+++ b/salt/states/btrfs.py
@@ -103,9 +103,9 @@ def __mount_device(action):
'''
@functools.wraps(action)
def wrapper(*args, **kwargs):
- name = kwargs['name']
- device = kwargs['device']
- use_default = kwargs.get('use_default', False)
+ name = kwargs.get("name", args[0] if args else None)
+ device = kwargs.get("device", args[1] if len(args) > 1 else None)
+ use_default = kwargs.get("use_default", False)
ret = {
'name': name,
diff --git a/tests/unit/states/test_btrfs.py b/tests/unit/states/test_btrfs.py
index c68f6279dc..c722630aef 100644
--- a/tests/unit/states/test_btrfs.py
+++ b/tests/unit/states/test_btrfs.py
@@ -245,6 +245,33 @@ class BtrfsTestCase(TestCase, LoaderModuleMockMixin):
mount.assert_called_once()
umount.assert_called_once()
+ @skipIf(salt.utils.platform.is_windows(), "Skip on Windows")
+ @patch("salt.states.btrfs._umount")
+ @patch("salt.states.btrfs._mount")
+ def test_subvolume_created_exists_decorator(self, mount, umount):
+ """
+ Test creating a subvolume using a non-kwargs call
+ """
+ mount.return_value = "/tmp/xxx"
+ salt_mock = {
+ "btrfs.subvolume_exists": MagicMock(return_value=True),
+ }
+ opts_mock = {
+ "test": False,
+ }
+ with patch.dict(btrfs.__salt__, salt_mock), patch.dict(
+ btrfs.__opts__, opts_mock
+ ):
+ assert btrfs.subvolume_created("@/var", "/dev/sda1") == {
+ "name": "@/var",
+ "result": True,
+ "changes": {},
+ "comment": ["Subvolume @/var already present"],
+ }
+ salt_mock["btrfs.subvolume_exists"].assert_called_with("/tmp/xxx/@/var")
+ mount.assert_called_once()
+ umount.assert_called_once()
+
@patch('salt.states.btrfs._umount')
@patch('salt.states.btrfs._mount')
def test_subvolume_created_exists_test(self, mount, umount):
--
2.27.0
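The fix boils down to one pattern: a decorator that must work for both calling conventions reads each parameter from kwargs first and falls back to the positional args. A minimal sketch (mount_device and subvolume_created here are simplified stand-ins for the Salt state code):

    import functools

    def mount_device(action):
        @functools.wraps(action)
        def wrapper(*args, **kwargs):
            # kwargs first, positional args as the fallback, so both
            # f('@/var', '/dev/sda1') and f(name=..., device=...) work.
            name = kwargs.get('name', args[0] if args else None)
            device = kwargs.get('device', args[1] if len(args) > 1 else None)
            return action(name, device)
        return wrapper

    @mount_device
    def subvolume_created(name, device):
        return {'name': name, 'device': device}

    print(subvolume_created('@/var', '/dev/sda1'))              # positional
    print(subvolume_created(name='@/var', device='/dev/sda1'))  # keyword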
++++++ fix-a-test-and-some-variable-names-229.patch ++++++
From e6c6cedbedb84ac4da78bc593128aeca5fc8542a Mon Sep 17 00:00:00 2001
From: Alberto Planas <aplanas@suse.com>
Date: Tue, 12 May 2020 14:16:23 +0200
Subject: [PATCH] Fix a test and some variable names (#229)
* loop: fix variable names for until_no_eval
* Fix test_core tests for fqdns errors
---
salt/modules/network.py | 2 +-
tests/unit/grains/test_core.py | 24 +++++++++++++-----------
2 files changed, 14 insertions(+), 12 deletions(-)
diff --git a/salt/modules/network.py b/salt/modules/network.py
index 880f4f8d5f..9e11eb816e 100644
--- a/salt/modules/network.py
+++ b/salt/modules/network.py
@@ -1946,4 +1946,4 @@ def fqdns():
elapsed = time.time() - start
log.debug('Elapsed time getting FQDNs: {} seconds'.format(elapsed))
- return {"fqdns": sorted(list(fqdns))}
\ No newline at end of file
+ return {"fqdns": sorted(list(fqdns))}
diff --git a/tests/unit/grains/test_core.py b/tests/unit/grains/test_core.py
index c276dee9f3..12adff3b59 100644
--- a/tests/unit/grains/test_core.py
+++ b/tests/unit/grains/test_core.py
@@ -1122,20 +1122,22 @@ class CoreGrainsTestCase(TestCase, LoaderModuleMockMixin):
for errno in (0, core.HOST_NOT_FOUND, core.NO_DATA):
mock_log = MagicMock()
+ with patch.dict(core.__salt__, {'network.fqdns': salt.modules.network.fqdns}):
+ with patch.object(socket, 'gethostbyaddr',
+ side_effect=_gen_gethostbyaddr(errno)):
+ with patch('salt.modules.network.log', mock_log):
+ self.assertEqual(core.fqdns(), {'fqdns': []})
+ mock_log.debug.assert_called()
+ mock_log.error.assert_not_called()
+
+ mock_log = MagicMock()
+ with patch.dict(core.__salt__, {'network.fqdns': salt.modules.network.fqdns}):
with patch.object(socket, 'gethostbyaddr',
- side_effect=_gen_gethostbyaddr(errno)):
- with patch('salt.grains.core.log', mock_log):
+ side_effect=_gen_gethostbyaddr(-1)):
+ with patch('salt.modules.network.log', mock_log):
self.assertEqual(core.fqdns(), {'fqdns': []})
mock_log.debug.assert_called_once()
- mock_log.error.assert_not_called()
-
- mock_log = MagicMock()
- with patch.object(socket, 'gethostbyaddr',
- side_effect=_gen_gethostbyaddr(-1)):
- with patch('salt.grains.core.log', mock_log):
- self.assertEqual(core.fqdns(), {'fqdns': []})
- mock_log.debug.assert_not_called()
- mock_log.error.assert_called_once()
+ mock_log.error.assert_called_once()
@patch.object(salt.utils.platform, 'is_windows', MagicMock(return_value=False))
@patch('salt.utils.network.ip_addrs', MagicMock(return_value=['1.2.3.4', '5.6.7.8']))
--
2.26.2
++++++ fix-a-wrong-rebase-in-test_core.py-180.patch ++++++
From 6418d9ebc3b269a0246262f79c0bab367e39cc52 Mon Sep 17 00:00:00 2001
From: Alberto Planas <aplanas@gmail.com>
Date: Fri, 25 Oct 2019 15:43:16 +0200
Subject: [PATCH] Fix a wrong rebase in test_core.py (#180)
* core: ignore wrong product_name files
Some firmwares (like on some NUC machines) do not provide valid
/sys/class/dmi/id/product_name strings. In those cases a
UnicodeDecodeError exception happens.
This patch ignores this kind of issue during grain creation.
(cherry picked from commit 27b001bd5408359aa5dd219bfd900095ed592fe8)
* core: remove duplicate dead code
(cherry picked from commit bd0213bae00b737b24795bec3c030ebfe476e0d8)
---
salt/grains/core.py | 4 ++--
tests/unit/grains/test_core.py | 45 ------------------------------------------
2 files changed, 2 insertions(+), 47 deletions(-)
diff --git a/salt/grains/core.py b/salt/grains/core.py
index 68c43482d3..20950988d9 100644
--- a/salt/grains/core.py
+++ b/salt/grains/core.py
@@ -1000,7 +1000,7 @@ def _virtual(osdata):
except UnicodeDecodeError:
# Some firmwares provide non-valid 'product_name'
# files, ignore them
- pass
+ log.debug('The content in /sys/devices/virtual/dmi/id/product_name is not valid')
except IOError:
pass
elif osdata['kernel'] == 'FreeBSD':
@@ -2568,7 +2568,7 @@ def _hw_data(osdata):
except UnicodeDecodeError:
# Some firmwares provide non-valid 'product_name'
# files, ignore them
- pass
+ log.debug('The content in /sys/devices/virtual/dmi/id/product_name is not valid')
except (IOError, OSError) as err:
# PermissionError is new to Python 3, but corresponds to the EACESS and
# EPERM error numbers. Use those instead here for PY2 compatibility.
diff --git a/tests/unit/grains/test_core.py b/tests/unit/grains/test_core.py
index c4731f667a..b4ed9379e5 100644
--- a/tests/unit/grains/test_core.py
+++ b/tests/unit/grains/test_core.py
@@ -1544,51 +1544,6 @@ class CoreGrainsTestCase(TestCase, LoaderModuleMockMixin):
self.assertIn('osfullname', os_grains)
self.assertEqual(os_grains.get('osfullname'), 'FreeBSD')
- @skipIf(not salt.utils.platform.is_linux(), 'System is not Linux')
- def test_kernelparams_return(self):
- expectations = [
- ('BOOT_IMAGE=/vmlinuz-3.10.0-693.2.2.el7.x86_64',
- {'kernelparams': [('BOOT_IMAGE', '/vmlinuz-3.10.0-693.2.2.el7.x86_64')]}),
- ('root=/dev/mapper/centos_daemon-root',
- {'kernelparams': [('root', '/dev/mapper/centos_daemon-root')]}),
- ('rhgb quiet ro',
- {'kernelparams': [('rhgb', None), ('quiet', None), ('ro', None)]}),
- ('param="value1"',
- {'kernelparams': [('param', 'value1')]}),
- ('param="value1 value2 value3"',
- {'kernelparams': [('param', 'value1 value2 value3')]}),
- ('param="value1 value2 value3" LANG="pl" ro',
- {'kernelparams': [('param', 'value1 value2 value3'), ('LANG', 'pl'), ('ro', None)]}),
- ('ipv6.disable=1',
- {'kernelparams': [('ipv6.disable', '1')]}),
- ('param="value1:value2:value3"',
- {'kernelparams': [('param', 'value1:value2:value3')]}),
- ('param="value1,value2,value3"',
- {'kernelparams': [('param', 'value1,value2,value3')]}),
- ('param="value1" param="value2" param="value3"',
- {'kernelparams': [('param', 'value1'), ('param', 'value2'), ('param', 'value3')]}),
- ]
-
- for cmdline, expectation in expectations:
- with patch('salt.utils.files.fopen', mock_open(read_data=cmdline)):
- self.assertEqual(core.kernelparams(), expectation)
-
- @skipIf(not salt.utils.platform.is_linux(), 'System is not Linux')
- @patch('os.path.exists')
- @patch('salt.utils.platform.is_proxy')
- def test__hw_data_linux_empty(self, is_proxy, exists):
- is_proxy.return_value = False
- exists.return_value = True
- with patch('salt.utils.files.fopen', mock_open(read_data='')):
- self.assertEqual(core._hw_data({'kernel': 'Linux'}), {
- 'biosreleasedate': '',
- 'biosversion': '',
- 'manufacturer': '',
- 'productname': '',
- 'serialnumber': '',
- 'uuid': ''
- })
-
@skipIf(not salt.utils.platform.is_linux(), 'System is not Linux')
@skipIf(six.PY2, 'UnicodeDecodeError is throw in Python 3')
@patch('os.path.exists')
--
2.16.4
++++++ fix-aptpkg-systemd-call-bsc-1143301.patch ++++++
From c2989e749f04aa8477130df649e550f5349a9a1f Mon Sep 17 00:00:00 2001
From: Mihai Dinca <mdinca@suse.de>
Date: Wed, 31 Jul 2019 15:29:03 +0200
Subject: [PATCH] Fix aptpkg systemd call (bsc#1143301)
---
salt/modules/aptpkg.py | 2 +-
tests/unit/modules/test_aptpkg.py | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/salt/modules/aptpkg.py b/salt/modules/aptpkg.py
index 13484c96bc..a5b039fc79 100644
--- a/salt/modules/aptpkg.py
+++ b/salt/modules/aptpkg.py
@@ -168,7 +168,7 @@ def _call_apt(args, scope=True, **kwargs):
'''
cmd = []
if scope and salt.utils.systemd.has_scope(__context__) and __salt__['config.get']('systemd.scope', True):
- cmd.extend(['systemd-run', '--scope'])
+ cmd.extend(['systemd-run', '--scope', '--description "{0}"'.format(__name__)])
cmd.extend(args)
params = {'output_loglevel': 'trace',
diff --git a/tests/unit/modules/test_aptpkg.py b/tests/unit/modules/test_aptpkg.py
index 10e960f090..88eed062c4 100644
--- a/tests/unit/modules/test_aptpkg.py
+++ b/tests/unit/modules/test_aptpkg.py
@@ -645,7 +645,7 @@ class AptUtilsTestCase(TestCase, LoaderModuleMockMixin):
with patch.dict(aptpkg.__salt__, {'cmd.run_all': MagicMock(), 'config.get': MagicMock(return_value=True)}):
aptpkg._call_apt(['apt-get', 'purge', 'vim']) # pylint: disable=W0106
aptpkg.__salt__['cmd.run_all'].assert_called_once_with(
- ['systemd-run', '--scope', 'apt-get', 'purge', 'vim'], env={},
+ ['systemd-run', '--scope', '--description "salt.modules.aptpkg"', 'apt-get', 'purge', 'vim'], env={},
output_loglevel='trace', python_shell=False)
def test_call_apt_with_kwargs(self):
--
2.16.4
++++++ fix-async-batch-multiple-done-events.patch ++++++
From 42d7e1de2c69d82447e73eab483e5d3c299d55f7 Mon Sep 17 00:00:00 2001
From: Mihai Dinca <mdinca@suse.de>
Date: Tue, 7 May 2019 12:24:35 +0200
Subject: [PATCH] Fix async-batch multiple done events
---
salt/cli/batch_async.py | 17 ++++++++++++-----
tests/unit/cli/test_batch_async.py | 20 +++++++++++++-------
2 files changed, 25 insertions(+), 12 deletions(-)
diff --git a/salt/cli/batch_async.py b/salt/cli/batch_async.py
index 9c20b2fc6e..8c8f481e34 100644
--- a/salt/cli/batch_async.py
+++ b/salt/cli/batch_async.py
@@ -84,6 +84,7 @@ class BatchAsync(object):
listen=True,
io_loop=ioloop,
keep_loop=True)
+ self.scheduled = False
def __set_event_handler(self):
ping_return_pattern = 'salt/job/{0}/ret/*'.format(self.ping_jid)
@@ -116,8 +117,7 @@ class BatchAsync(object):
if minion in self.active:
self.active.remove(minion)
self.done_minions.add(minion)
- # call later so that we maybe gather more returns
- self.event.io_loop.call_later(self.batch_delay, self.schedule_next)
+ self.schedule_next()
def _get_next(self):
to_run = self.minions.difference(
@@ -137,7 +137,7 @@ class BatchAsync(object):
self.active = self.active.difference(self.timedout_minions)
running = batch_minions.difference(self.done_minions).difference(self.timedout_minions)
if timedout_minions:
- self.event.io_loop.call_later(self.batch_delay, self.schedule_next)
+ self.schedule_next()
if running:
self.event.io_loop.add_callback(self.find_job, running)
@@ -189,7 +189,7 @@ class BatchAsync(object):
"metadata": self.metadata
}
self.event.fire_event(data, "salt/batch/{0}/start".format(self.batch_jid))
- yield self.schedule_next()
+ yield self.run_next()
def end_batch(self):
left = self.minions.symmetric_difference(self.done_minions.union(self.timedout_minions))
@@ -204,8 +204,14 @@ class BatchAsync(object):
self.event.fire_event(data, "salt/batch/{0}/done".format(self.batch_jid))
self.event.remove_event_handler(self.__event_handler)
- @tornado.gen.coroutine
def schedule_next(self):
+ if not self.scheduled:
+ self.scheduled = True
+ # call later so that we maybe gather more returns
+ self.event.io_loop.call_later(self.batch_delay, self.run_next)
+
+ @tornado.gen.coroutine
+ def run_next(self):
next_batch = self._get_next()
if next_batch:
self.active = self.active.union(next_batch)
@@ -225,3 +231,4 @@ class BatchAsync(object):
self.active = self.active.difference(next_batch)
else:
self.end_batch()
+ self.scheduled = False
diff --git a/tests/unit/cli/test_batch_async.py b/tests/unit/cli/test_batch_async.py
index d519157d92..441f9c58b9 100644
--- a/tests/unit/cli/test_batch_async.py
+++ b/tests/unit/cli/test_batch_async.py
@@ -111,14 +111,14 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase):
@tornado.testing.gen_test
def test_start_batch_calls_next(self):
- self.batch.schedule_next = MagicMock(return_value=MagicMock())
+ self.batch.run_next = MagicMock(return_value=MagicMock())
self.batch.event = MagicMock()
future = tornado.gen.Future()
future.set_result(None)
- self.batch.schedule_next = MagicMock(return_value=future)
+ self.batch.run_next = MagicMock(return_value=future)
self.batch.start_batch()
self.assertEqual(self.batch.initialized, True)
- self.assertEqual(len(self.batch.schedule_next.mock_calls), 1)
+ self.assertEqual(len(self.batch.run_next.mock_calls), 1)
def test_batch_fire_done_event(self):
self.batch.targeted_minions = {'foo', 'baz', 'bar'}
@@ -154,7 +154,7 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase):
future = tornado.gen.Future()
future.set_result({'minions': ['foo', 'bar']})
self.batch.local.run_job_async.return_value = future
- ret = self.batch.schedule_next().result()
+ ret = self.batch.run_next().result()
self.assertEqual(
self.batch.local.run_job_async.call_args[0],
({'foo', 'bar'}, 'my.fun', [], 'list')
@@ -253,7 +253,7 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase):
self.assertEqual(self.batch.done_minions, {'foo'})
self.assertEqual(
self.batch.event.io_loop.call_later.call_args[0],
- (self.batch.batch_delay, self.batch.schedule_next))
+ (self.batch.batch_delay, self.batch.run_next))
def test_batch__event_handler_find_job_return(self):
self.batch.event = MagicMock(
@@ -263,10 +263,10 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase):
self.assertEqual(self.batch.find_job_returned, {'foo'})
@tornado.testing.gen_test
- def test_batch_schedule_next_end_batch_when_no_next(self):
+ def test_batch_run_next_end_batch_when_no_next(self):
self.batch.end_batch = MagicMock()
self.batch._get_next = MagicMock(return_value={})
- self.batch.schedule_next()
+ self.batch.run_next()
self.assertEqual(len(self.batch.end_batch.mock_calls), 1)
@tornado.testing.gen_test
@@ -342,3 +342,9 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase):
self.batch.event.io_loop.add_callback.call_args[0],
(self.batch.find_job, {'foo'})
)
+
+ def test_only_on_run_next_is_scheduled(self):
+ self.batch.event = MagicMock()
+ self.batch.scheduled = True
+ self.batch.schedule_next()
+ self.assertEqual(len(self.batch.event.io_loop.call_later.mock_calls), 0)
--
2.16.4
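The scheduled flag acts as a debounce: many minion returns can land inside one batch window, and without the guard each of them would queue its own delayed run_next call, firing duplicate events. A stripped-down runnable sketch of the pattern on a Tornado IOLoop (class and timings invented):

    import tornado.gen
    import tornado.ioloop

    class Batcher(object):
        def __init__(self):
            self.scheduled = False
            self.batch_delay = 1.0

        def schedule_next(self):
            # Debounce: only one run_next may be pending at a time,
            # no matter how many returns trigger scheduling.
            if not self.scheduled:
                self.scheduled = True
                tornado.ioloop.IOLoop.current().call_later(
                    self.batch_delay, self.run_next)

        @tornado.gen.coroutine
        def run_next(self):
            self.scheduled = False  # allow the next round to be scheduled
            print('running next batch')

    loop = tornado.ioloop.IOLoop.current()
    b = Batcher()
    for _ in range(3):
        b.schedule_next()  # only the first call actually schedules
    loop.call_later(2, loop.stop)
    loop.start()  # prints 'running next batch' exactly once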
++++++ fix-async-batch-race-conditions.patch ++++++
From dc001cb47fd88a8e8a1bd82a1457325822d1220b Mon Sep 17 00:00:00 2001
From: Mihai Dinca <mdinca@suse.de>
Date: Thu, 11 Apr 2019 15:57:59 +0200
Subject: [PATCH] Fix async batch race conditions
Close batching when there is no next batch
---
salt/cli/batch_async.py | 80 +++++++++++++++++++-------------------
tests/unit/cli/test_batch_async.py | 35 +++++++----------
2 files changed, 54 insertions(+), 61 deletions(-)
diff --git a/salt/cli/batch_async.py b/salt/cli/batch_async.py
index 3160d46d8b..9c20b2fc6e 100644
--- a/salt/cli/batch_async.py
+++ b/salt/cli/batch_async.py
@@ -37,14 +37,14 @@ class BatchAsync(object):
- tag: salt/batch/<batch-jid>/start
- data: {
"available_minions": self.minions,
- "down_minions": self.down_minions
+ "down_minions": targeted_minions - presence_ping_minions
}
When the batch ends, an `done` event is fired:
- tag: salt/batch/<batch-jid>/done
- data: {
"available_minions": self.minions,
- "down_minions": self.down_minions,
+ "down_minions": targeted_minions - presence_ping_minions
"done_minions": self.done_minions,
"timedout_minions": self.timedout_minions
}
@@ -67,7 +67,7 @@ class BatchAsync(object):
self.eauth = batch_get_eauth(clear_load['kwargs'])
self.metadata = clear_load['kwargs'].get('metadata', {})
self.minions = set()
- self.down_minions = set()
+ self.targeted_minions = set()
self.timedout_minions = set()
self.done_minions = set()
self.active = set()
@@ -108,8 +108,7 @@ class BatchAsync(object):
minion = data['id']
if op == 'ping_return':
self.minions.add(minion)
- self.down_minions.remove(minion)
- if not self.down_minions:
+ if self.targeted_minions == self.minions:
self.event.io_loop.spawn_callback(self.start_batch)
elif op == 'find_job_return':
self.find_job_returned.add(minion)
@@ -120,9 +119,6 @@ class BatchAsync(object):
# call later so that we maybe gather more returns
self.event.io_loop.call_later(self.batch_delay, self.schedule_next)
- if self.initialized and self.done_minions == self.minions.difference(self.timedout_minions):
- self.end_batch()
-
def _get_next(self):
to_run = self.minions.difference(
self.done_minions).difference(
@@ -135,16 +131,13 @@ class BatchAsync(object):
return set(list(to_run)[:next_batch_size])
@tornado.gen.coroutine
- def check_find_job(self, minions):
- did_not_return = minions.difference(self.find_job_returned)
- if did_not_return:
- for minion in did_not_return:
- if minion in self.find_job_returned:
- self.find_job_returned.remove(minion)
- if minion in self.active:
- self.active.remove(minion)
- self.timedout_minions.add(minion)
- running = minions.difference(did_not_return).difference(self.done_minions).difference(self.timedout_minions)
+ def check_find_job(self, batch_minions):
+ timedout_minions = batch_minions.difference(self.find_job_returned).difference(self.done_minions)
+ self.timedout_minions = self.timedout_minions.union(timedout_minions)
+ self.active = self.active.difference(self.timedout_minions)
+ running = batch_minions.difference(self.done_minions).difference(self.timedout_minions)
+ if timedout_minions:
+ self.event.io_loop.call_later(self.batch_delay, self.schedule_next)
if running:
self.event.io_loop.add_callback(self.find_job, running)
@@ -183,7 +176,7 @@ class BatchAsync(object):
jid=self.ping_jid,
metadata=self.metadata,
**self.eauth)
- self.down_minions = set(ping_return['minions'])
+ self.targeted_minions = set(ping_return['minions'])
@tornado.gen.coroutine
def start_batch(self):
@@ -192,36 +185,43 @@ class BatchAsync(object):
self.initialized = True
data = {
"available_minions": self.minions,
- "down_minions": self.down_minions,
+ "down_minions": self.targeted_minions.difference(self.minions),
"metadata": self.metadata
}
self.event.fire_event(data, "salt/batch/{0}/start".format(self.batch_jid))
yield self.schedule_next()
def end_batch(self):
- data = {
- "available_minions": self.minions,
- "down_minions": self.down_minions,
- "done_minions": self.done_minions,
- "timedout_minions": self.timedout_minions,
- "metadata": self.metadata
- }
- self.event.fire_event(data, "salt/batch/{0}/done".format(self.batch_jid))
- self.event.remove_event_handler(self.__event_handler)
+ left = self.minions.symmetric_difference(self.done_minions.union(self.timedout_minions))
+ if not left:
+ data = {
+ "available_minions": self.minions,
+ "down_minions": self.targeted_minions.difference(self.minions),
+ "done_minions": self.done_minions,
+ "timedout_minions": self.timedout_minions,
+ "metadata": self.metadata
+ }
+ self.event.fire_event(data, "salt/batch/{0}/done".format(self.batch_jid))
+ self.event.remove_event_handler(self.__event_handler)
@tornado.gen.coroutine
def schedule_next(self):
next_batch = self._get_next()
if next_batch:
- yield self.local.run_job_async(
- next_batch,
- self.opts['fun'],
- self.opts['arg'],
- 'list',
- raw=self.opts.get('raw', False),
- ret=self.opts.get('return', ''),
- gather_job_timeout=self.opts['gather_job_timeout'],
- jid=self.batch_jid,
- metadata=self.metadata)
- self.event.io_loop.call_later(self.opts['timeout'], self.find_job, set(next_batch))
self.active = self.active.union(next_batch)
+ try:
+ yield self.local.run_job_async(
+ next_batch,
+ self.opts['fun'],
+ self.opts['arg'],
+ 'list',
+ raw=self.opts.get('raw', False),
+ ret=self.opts.get('return', ''),
+ gather_job_timeout=self.opts['gather_job_timeout'],
+ jid=self.batch_jid,
+ metadata=self.metadata)
+ self.event.io_loop.call_later(self.opts['timeout'], self.find_job, set(next_batch))
+ except Exception as ex:
+ self.active = self.active.difference(next_batch)
+ else:
+ self.end_batch()
diff --git a/tests/unit/cli/test_batch_async.py b/tests/unit/cli/test_batch_async.py
index f65b6a06c3..d519157d92 100644
--- a/tests/unit/cli/test_batch_async.py
+++ b/tests/unit/cli/test_batch_async.py
@@ -75,8 +75,8 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase):
self.batch.local.run_job_async.call_args[0],
('*', 'test.ping', [], 'glob')
)
- # assert down_minions == all minions matched by tgt
- self.assertEqual(self.batch.down_minions, set(['foo', 'bar']))
+ # assert targeted_minions == all minions matched by tgt
+ self.assertEqual(self.batch.targeted_minions, set(['foo', 'bar']))
@tornado.testing.gen_test
def test_batch_start_on_gather_job_timeout(self):
@@ -121,7 +121,10 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase):
self.assertEqual(len(self.batch.schedule_next.mock_calls), 1)
def test_batch_fire_done_event(self):
+ self.batch.targeted_minions = {'foo', 'baz', 'bar'}
self.batch.minions = set(['foo', 'bar'])
+ self.batch.done_minions = {'foo'}
+ self.batch.timedout_minions = {'bar'}
self.batch.event = MagicMock()
self.batch.metadata = {'mykey': 'myvalue'}
self.batch.end_batch()
@@ -130,9 +133,9 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase):
(
{
'available_minions': set(['foo', 'bar']),
- 'done_minions': set(),
- 'down_minions': set(),
- 'timedout_minions': set(),
+ 'done_minions': self.batch.done_minions,
+ 'down_minions': {'baz'},
+ 'timedout_minions': self.batch.timedout_minions,
'metadata': self.batch.metadata
},
"salt/batch/1235/done"
@@ -212,7 +215,7 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase):
self.assertEqual(self.batch._get_next(), set())
def test_batch__event_handler_ping_return(self):
- self.batch.down_minions = {'foo'}
+ self.batch.targeted_minions = {'foo'}
self.batch.event = MagicMock(
unpack=MagicMock(return_value=('salt/job/1234/ret/foo', {'id': 'foo'})))
self.batch.start()
@@ -222,7 +225,7 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase):
self.assertEqual(self.batch.done_minions, set())
def test_batch__event_handler_call_start_batch_when_all_pings_return(self):
- self.batch.down_minions = {'foo'}
+ self.batch.targeted_minions = {'foo'}
self.batch.event = MagicMock(
unpack=MagicMock(return_value=('salt/job/1234/ret/foo', {'id': 'foo'})))
self.batch.start()
@@ -232,7 +235,7 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase):
(self.batch.start_batch,))
def test_batch__event_handler_not_call_start_batch_when_not_all_pings_return(self):
- self.batch.down_minions = {'foo', 'bar'}
+ self.batch.targeted_minions = {'foo', 'bar'}
self.batch.event = MagicMock(
unpack=MagicMock(return_value=('salt/job/1234/ret/foo', {'id': 'foo'})))
self.batch.start()
@@ -260,20 +263,10 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase):
self.assertEqual(self.batch.find_job_returned, {'foo'})
@tornado.testing.gen_test
- def test_batch__event_handler_end_batch(self):
- self.batch.event = MagicMock(
- unpack=MagicMock(return_value=('salt/job/not-my-jid/ret/foo', {'id': 'foo'})))
- future = tornado.gen.Future()
- future.set_result({'minions': ['foo', 'bar', 'baz']})
- self.batch.local.run_job_async.return_value = future
- self.batch.start()
- self.batch.initialized = True
- self.assertEqual(self.batch.down_minions, {'foo', 'bar', 'baz'})
+ def test_batch_schedule_next_end_batch_when_no_next(self):
self.batch.end_batch = MagicMock()
- self.batch.minions = {'foo', 'bar', 'baz'}
- self.batch.done_minions = {'foo', 'bar'}
- self.batch.timedout_minions = {'baz'}
- self.batch._BatchAsync__event_handler(MagicMock())
+ self.batch._get_next = MagicMock(return_value={})
+ self.batch.schedule_next()
self.assertEqual(len(self.batch.end_batch.mock_calls), 1)
@tornado.testing.gen_test
--
2.16.4
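A note on the patch above: the reworked end_batch() fires the done event only
once every pinged minion is either done or timed out. That bookkeeping is pure
set arithmetic and can be checked in isolation; a minimal sketch (names mirror
the patch, but this is illustrative code, not Salt's API):

    # Sketch of the end_batch() guard from the patch above.
    minions = {'foo', 'bar', 'baz'}          # minions that answered the ping
    done_minions = {'foo', 'bar'}            # finished their batch job
    timedout_minions = {'baz'}               # gave up waiting on these

    left = minions.symmetric_difference(done_minions.union(timedout_minions))
    if not left:
        print('safe to fire salt/batch/<jid>/done and remove the handler')

Because the guard runs on every completion path, firing the event twice (the
race the patch closes) becomes impossible once a minion set is accounted for.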
++++++ fix-batch_async-obsolete-test.patch ++++++
From 49780d409630fe18293a077e767aabfd183ff823 Mon Sep 17 00:00:00 2001
From: Mihai Dinca <mdinca@suse.de>
Date: Tue, 3 Dec 2019 11:22:42 +0100
Subject: [PATCH] Fix batch_async obsolete test
---
tests/unit/cli/test_batch_async.py | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/tests/unit/cli/test_batch_async.py b/tests/unit/cli/test_batch_async.py
index 12dfe543bc..f1d36a81fb 100644
--- a/tests/unit/cli/test_batch_async.py
+++ b/tests/unit/cli/test_batch_async.py
@@ -140,8 +140,14 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase):
"salt/batch/1235/done"
)
)
+
+ def test_batch__del__(self):
+ batch = BatchAsync(MagicMock(), MagicMock(), MagicMock())
+ event = MagicMock()
+ batch.event = event
+ batch.__del__()
self.assertEqual(
- len(self.batch.event.remove_event_handler.mock_calls), 1)
+ len(event.remove_event_handler.mock_calls), 1)
@tornado.testing.gen_test
def test_batch_next(self):
--
2.16.4
++++++ fix-bsc-1065792.patch ++++++
From 4acbe70851e3ef7a04fc5ad0dc9a2519f6989c66 Mon Sep 17 00:00:00 2001
From: Bo Maryniuk <bo@suse.de>
Date: Thu, 14 Dec 2017 16:21:40 +0100
Subject: [PATCH] Fix bsc#1065792
---
salt/states/service.py | 1 +
1 file changed, 1 insertion(+)
diff --git a/salt/states/service.py b/salt/states/service.py
index de7718ea49..987e37cd42 100644
--- a/salt/states/service.py
+++ b/salt/states/service.py
@@ -80,6 +80,7 @@ def __virtual__():
Only make these states available if a service provider has been detected or
assigned for this minion
'''
+ __salt__._load_all()
if 'service.start' in __salt__:
return __virtualname__
else:
--
2.16.4
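The one-line fix above forces Salt's lazy loader to populate itself before the
membership test in __virtual__() runs. The failure mode is easy to reproduce
with any lazily filled mapping; a toy sketch (this is not Salt's LazyLoader,
just an illustration of why the membership test can miss entries):

    class LazyDict(dict):
        # Stands in for a lazily populated module registry.
        def __init__(self, loader):
            super(LazyDict, self).__init__()
            self._loader = loader
        def _load_all(self):
            self.update(self._loader())

    mods = LazyDict(lambda: {'service.start': object()})
    print('service.start' in mods)   # False: nothing loaded yet
    mods._load_all()
    print('service.start' in mods)   # True: provider detected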
++++++ fix-cve-2020-11651-and-fix-cve-2020-11652.patch ++++++
++++ 766 lines (skipped)
++++++ fix-failing-unit-tests-for-batch-async.patch ++++++
From e6f6b38c75027c4c4f6395117b734dce6fb7433e Mon Sep 17 00:00:00 2001
From: Pablo Suárez Hernández <psuarezhernandez@suse.com>
Date: Fri, 4 Oct 2019 15:00:50 +0100
Subject: [PATCH] Fix failing unit tests for batch async
---
salt/cli/batch_async.py | 2 +-
tests/unit/cli/test_batch_async.py | 57 ++++++++++++++++++++++----------------
2 files changed, 34 insertions(+), 25 deletions(-)
diff --git a/salt/cli/batch_async.py b/salt/cli/batch_async.py
index f9e736f804..6d0dca1da5 100644
--- a/salt/cli/batch_async.py
+++ b/salt/cli/batch_async.py
@@ -88,7 +88,7 @@ class BatchAsync(object):
io_loop=ioloop,
keep_loop=True)
self.scheduled = False
- self.patterns = {}
+ self.patterns = set()
def __set_event_handler(self):
ping_return_pattern = 'salt/job/{0}/ret/*'.format(self.ping_jid)
diff --git a/tests/unit/cli/test_batch_async.py b/tests/unit/cli/test_batch_async.py
index 441f9c58b9..12dfe543bc 100644
--- a/tests/unit/cli/test_batch_async.py
+++ b/tests/unit/cli/test_batch_async.py
@@ -68,8 +68,8 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase):
ret = self.batch.start()
# assert start_batch is called later with batch_presence_ping_timeout as param
self.assertEqual(
- self.batch.event.io_loop.call_later.call_args[0],
- (self.batch.batch_presence_ping_timeout, self.batch.start_batch))
+ self.batch.event.io_loop.spawn_callback.call_args[0],
+ (self.batch.start_batch,))
# assert test.ping called
self.assertEqual(
self.batch.local.run_job_async.call_args[0],
@@ -88,8 +88,8 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase):
ret = self.batch.start()
# assert start_batch is called later with gather_job_timeout as param
self.assertEqual(
- self.batch.event.io_loop.call_later.call_args[0],
- (self.batch.opts['gather_job_timeout'], self.batch.start_batch))
+ self.batch.event.io_loop.spawn_callback.call_args[0],
+ (self.batch.start_batch,))
def test_batch_fire_start_event(self):
self.batch.minions = set(['foo', 'bar'])
@@ -113,12 +113,11 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase):
def test_start_batch_calls_next(self):
self.batch.run_next = MagicMock(return_value=MagicMock())
self.batch.event = MagicMock()
- future = tornado.gen.Future()
- future.set_result(None)
- self.batch.run_next = MagicMock(return_value=future)
self.batch.start_batch()
self.assertEqual(self.batch.initialized, True)
- self.assertEqual(len(self.batch.run_next.mock_calls), 1)
+ self.assertEqual(
+ self.batch.event.io_loop.spawn_callback.call_args[0],
+ (self.batch.run_next,))
def test_batch_fire_done_event(self):
self.batch.targeted_minions = {'foo', 'baz', 'bar'}
@@ -154,14 +153,14 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase):
future = tornado.gen.Future()
future.set_result({'minions': ['foo', 'bar']})
self.batch.local.run_job_async.return_value = future
- ret = self.batch.run_next().result()
+ self.batch.run_next()
self.assertEqual(
self.batch.local.run_job_async.call_args[0],
({'foo', 'bar'}, 'my.fun', [], 'list')
)
self.assertEqual(
- self.batch.event.io_loop.call_later.call_args[0],
- (self.batch.opts['timeout'], self.batch.find_job, {'foo', 'bar'})
+ self.batch.event.io_loop.spawn_callback.call_args[0],
+ (self.batch.find_job, {'foo', 'bar'})
)
self.assertEqual(self.batch.active, {'bar', 'foo'})
@@ -252,13 +251,14 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase):
self.assertEqual(self.batch.active, set())
self.assertEqual(self.batch.done_minions, {'foo'})
self.assertEqual(
- self.batch.event.io_loop.call_later.call_args[0],
- (self.batch.batch_delay, self.batch.run_next))
+ self.batch.event.io_loop.spawn_callback.call_args[0],
+ (self.batch.schedule_next,))
def test_batch__event_handler_find_job_return(self):
self.batch.event = MagicMock(
- unpack=MagicMock(return_value=('salt/job/1236/ret/foo', {'id': 'foo'})))
+ unpack=MagicMock(return_value=('salt/job/1236/ret/foo', {'id': 'foo', 'return': 'deadbeaf'})))
self.batch.start()
+ self.batch.patterns.add(('salt/job/1236/ret/*', 'find_job_return'))
self.batch._BatchAsync__event_handler(MagicMock())
self.assertEqual(self.batch.find_job_returned, {'foo'})
@@ -275,10 +275,13 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase):
future = tornado.gen.Future()
future.set_result({})
self.batch.local.run_job_async.return_value = future
+ self.batch.minions = set(['foo', 'bar'])
+ self.batch.jid_gen = MagicMock(return_value="1234")
+ tornado.gen.sleep = MagicMock(return_value=future)
self.batch.find_job({'foo', 'bar'})
self.assertEqual(
- self.batch.event.io_loop.call_later.call_args[0],
- (self.batch.opts['gather_job_timeout'], self.batch.check_find_job, {'foo', 'bar'})
+ self.batch.event.io_loop.spawn_callback.call_args[0],
+ (self.batch.check_find_job, {'foo', 'bar'}, "1234")
)
@tornado.testing.gen_test
@@ -288,17 +291,21 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase):
future = tornado.gen.Future()
future.set_result({})
self.batch.local.run_job_async.return_value = future
+ self.batch.minions = set(['foo', 'bar'])
+ self.batch.jid_gen = MagicMock(return_value="1234")
+ tornado.gen.sleep = MagicMock(return_value=future)
self.batch.find_job({'foo', 'bar'})
self.assertEqual(
- self.batch.event.io_loop.call_later.call_args[0],
- (self.batch.opts['gather_job_timeout'], self.batch.check_find_job, {'foo'})
+ self.batch.event.io_loop.spawn_callback.call_args[0],
+ (self.batch.check_find_job, {'foo'}, "1234")
)
def test_batch_check_find_job_did_not_return(self):
self.batch.event = MagicMock()
self.batch.active = {'foo'}
self.batch.find_job_returned = set()
- self.batch.check_find_job({'foo'})
+ self.batch.patterns = { ('salt/job/1234/ret/*', 'find_job_return') }
+ self.batch.check_find_job({'foo'}, jid="1234")
self.assertEqual(self.batch.find_job_returned, set())
self.assertEqual(self.batch.active, set())
self.assertEqual(len(self.batch.event.io_loop.add_callback.mock_calls), 0)
@@ -306,9 +313,10 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase):
def test_batch_check_find_job_did_return(self):
self.batch.event = MagicMock()
self.batch.find_job_returned = {'foo'}
- self.batch.check_find_job({'foo'})
+ self.batch.patterns = { ('salt/job/1234/ret/*', 'find_job_return') }
+ self.batch.check_find_job({'foo'}, jid="1234")
self.assertEqual(
- self.batch.event.io_loop.add_callback.call_args[0],
+ self.batch.event.io_loop.spawn_callback.call_args[0],
(self.batch.find_job, {'foo'})
)
@@ -329,7 +337,8 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase):
# both not yet done but only 'foo' responded to find_job
not_done = {'foo', 'bar'}
- self.batch.check_find_job(not_done)
+ self.batch.patterns = { ('salt/job/1234/ret/*', 'find_job_return') }
+ self.batch.check_find_job(not_done, jid="1234")
# assert 'bar' removed from active
self.assertEqual(self.batch.active, {'foo'})
@@ -339,7 +348,7 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase):
# assert 'find_job' schedueled again only for 'foo'
self.assertEqual(
- self.batch.event.io_loop.add_callback.call_args[0],
+ self.batch.event.io_loop.spawn_callback.call_args[0],
(self.batch.find_job, {'foo'})
)
@@ -347,4 +356,4 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase):
self.batch.event = MagicMock()
self.batch.scheduled = True
self.batch.schedule_next()
- self.assertEqual(len(self.batch.event.io_loop.call_later.mock_calls), 0)
+ self.assertEqual(len(self.batch.event.io_loop.spawn_callback.mock_calls), 0)
--
2.16.4
++++++ fix-for-log-checking-in-x509-test.patch ++++++
From e0ca0d0d2a62f18e2712223e130af5faa8e0fe05 Mon Sep 17 00:00:00 2001
From: Jochen Breuer <jbreuer@suse.de>
Date: Thu, 28 Nov 2019 15:23:36 +0100
Subject: [PATCH] Fix for log checking in x509 test
We are logging in debug and not in trace mode here.
---
tests/unit/modules/test_x509.py | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/tests/unit/modules/test_x509.py b/tests/unit/modules/test_x509.py
index 624a927bec..976af634c7 100644
--- a/tests/unit/modules/test_x509.py
+++ b/tests/unit/modules/test_x509.py
@@ -68,9 +68,9 @@ class X509TestCase(TestCase, LoaderModuleMockMixin):
subj = FakeSubject()
x509._parse_subject(subj)
- assert x509.log.trace.call_args[0][0] == "Missing attribute '%s'. Error: %s"
- assert x509.log.trace.call_args[0][1] == list(subj.nid.keys())[0]
- assert isinstance(x509.log.trace.call_args[0][2], TypeError)
+ assert x509.log.debug.call_args[0][0] == "Missing attribute '%s'. Error: %s"
+ assert x509.log.debug.call_args[0][1] == list(subj.nid.keys())[0]
+ assert isinstance(x509.log.debug.call_args[0][2], TypeError)
@skipIf(not HAS_M2CRYPTO, 'Skipping, M2Crypto is unavailble')
def test_get_pem_entry(self):
--
2.16.4
++++++ fix-for-return-value-ret-vs-return-in-batch-mode.patch ++++++
From f8eeddc8461a66d34b11e3677729d733b3deb804 Mon Sep 17 00:00:00 2001
From: Jochen Breuer <jbreuer@suse.de>
Date: Thu, 9 Apr 2020 17:12:54 +0200
Subject: [PATCH] Fix for return value ret vs return in batch mode
The least intrusive fix for ret vs return in batch mode.
---
salt/cli/batch.py | 16 ++++++----
tests/unit/cli/test_batch.py | 62 ++++++++++++++++++++++++++++++++++++
2 files changed, 71 insertions(+), 7 deletions(-)
diff --git a/salt/cli/batch.py b/salt/cli/batch.py
index 10fc81a5f4..d5b8754ad7 100644
--- a/salt/cli/batch.py
+++ b/salt/cli/batch.py
@@ -234,14 +234,16 @@ class Batch(object):
if not self.quiet:
salt.utils.stringutils.print_cli('\nExecuting run on {0}\n'.format(sorted(next_)))
# create a new iterator for this batch of minions
+ return_value = self.opts.get("return", self.opts.get("ret", ""))
new_iter = self.local.cmd_iter_no_block(
- *args,
- raw=self.opts.get('raw', False),
- ret=self.opts.get('return', ''),
- show_jid=show_jid,
- verbose=show_verbose,
- gather_job_timeout=self.opts['gather_job_timeout'],
- **self.eauth)
+ *args,
+ raw=self.opts.get("raw", False),
+ ret=return_value,
+ show_jid=show_jid,
+ verbose=show_verbose,
+ gather_job_timeout=self.opts["gather_job_timeout"],
+ **self.eauth
+ )
# add it to our iterators and to the minion_tracker
iters.append(new_iter)
minion_tracker[new_iter] = {}
diff --git a/tests/unit/cli/test_batch.py b/tests/unit/cli/test_batch.py
index acabbe51f5..d7411e8039 100644
--- a/tests/unit/cli/test_batch.py
+++ b/tests/unit/cli/test_batch.py
@@ -72,3 +72,65 @@ class BatchTestCase(TestCase):
'''
ret = Batch.get_bnum(self.batch)
self.assertEqual(ret, None)
+
+ def test_return_value_in_run_for_ret(self):
+ """
+ cmd_iter_no_block should have been called with a return no matter if
+ the return value was in ret or return.
+ """
+ self.batch.opts = {
+ "batch": "100%",
+ "timeout": 5,
+ "fun": "test",
+ "arg": "foo",
+ "gather_job_timeout": 5,
+ "ret": "my_return",
+ }
+ self.batch.minions = ["foo", "bar", "baz"]
+ self.batch.local.cmd_iter_no_block = MagicMock(return_value=iter([]))
+ ret = Batch.run(self.batch)
+ # We need to fetch at least one object to trigger the relevant code path.
+ x = next(ret)
+ self.batch.local.cmd_iter_no_block.assert_called_with(
+ ["baz", "bar", "foo"],
+ "test",
+ "foo",
+ 5,
+ "list",
+ raw=False,
+ ret="my_return",
+ show_jid=False,
+ verbose=False,
+ gather_job_timeout=5,
+ )
+
+ def test_return_value_in_run_for_return(self):
+ """
+ cmd_iter_no_block should have been called with a return no matter if
+ the return value was in ret or return.
+ """
+ self.batch.opts = {
+ "batch": "100%",
+ "timeout": 5,
+ "fun": "test",
+ "arg": "foo",
+ "gather_job_timeout": 5,
+ "return": "my_return",
+ }
+ self.batch.minions = ["foo", "bar", "baz"]
+ self.batch.local.cmd_iter_no_block = MagicMock(return_value=iter([]))
+ ret = Batch.run(self.batch)
+ # We need to fetch at least one object to trigger the relevant code path.
+ x = next(ret)
+ self.batch.local.cmd_iter_no_block.assert_called_with(
+ ["baz", "bar", "foo"],
+ "test",
+ "foo",
+ 5,
+ "list",
+ raw=False,
+ ret="my_return",
+ show_jid=False,
+ verbose=False,
+ gather_job_timeout=5,
+ )
--
2.26.1
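Stripped of the CLI plumbing, the "least intrusive fix" is a two-key fallback
lookup on the parsed options; a minimal sketch with plain dicts standing in
for Salt's opts:

    # Either spelling of the option should reach cmd_iter_no_block as `ret`.
    for opts in ({'ret': 'my_return'}, {'return': 'my_return'}, {}):
        return_value = opts.get('return', opts.get('ret', ''))
        print(return_value)   # 'my_return', 'my_return', ''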
++++++ fix-for-suse-expanded-support-detection.patch ++++++
From 16d656744d2e7d915757d6f2ae26b57ad8230b0b Mon Sep 17 00:00:00 2001
From: Jochen Breuer <jbreuer@suse.de>
Date: Thu, 6 Sep 2018 17:15:18 +0200
Subject: [PATCH] Fix for SUSE Expanded Support detection
A SUSE ES installation has both the centos-release and the redhat-release
file. Since os_data only used the centos-release file to detect a
CentOS installation, this led to SUSE ES being detected as CentOS.
This change also adds a check for redhat-release and then marks the
'lsb_distrib_id' as RedHat.
---
salt/grains/core.py | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/salt/grains/core.py b/salt/grains/core.py
index 9b244def9c..2851809472 100644
--- a/salt/grains/core.py
+++ b/salt/grains/core.py
@@ -1892,6 +1892,15 @@ def os_data():
log.trace('Parsing distrib info from /etc/centos-release')
# CentOS Linux
grains['lsb_distrib_id'] = 'CentOS'
+ # Maybe CentOS Linux; could also be SUSE Expanded Support.
+ # SUSE ES has both, centos-release and redhat-release.
+ if os.path.isfile('/etc/redhat-release'):
+ with salt.utils.files.fopen('/etc/redhat-release') as ifile:
+ for line in ifile:
+ if "red hat enterprise linux server" in line.lower():
+ # This is a SUSE Expanded Support Rhel installation
+ grains['lsb_distrib_id'] = 'RedHat'
+ break
with salt.utils.files.fopen('/etc/centos-release') as ifile:
for line in ifile:
# Need to pull out the version and codename
--
2.16.4
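Outside of the grains module, the detection added above is just a guarded file
check; a minimal standalone sketch (built-in open() standing in for
salt.utils.files.fopen, the default value illustrative):

    import os

    def lsb_distrib_id(default='CentOS'):
        # SUSE Expanded Support ships centos-release *and* redhat-release;
        # the redhat-release contents decide whether this is really RedHat.
        if os.path.isfile('/etc/redhat-release'):
            with open('/etc/redhat-release') as ifile:
                for line in ifile:
                    if 'red hat enterprise linux server' in line.lower():
                        return 'RedHat'
        return default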
++++++ fix-for-temp-folder-definition-in-loader-unit-test.patch ++++++
From dd01a0fc594f024eee2267bed2f698f5a6c729bf Mon Sep 17 00:00:00 2001
From: Jochen Breuer <jbreuer@suse.de>
Date: Mon, 16 Mar 2020 15:25:42 +0100
Subject: [PATCH] Fix for temp folder definition in loader unit test
---
tests/unit/test_loader.py | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/tests/unit/test_loader.py b/tests/unit/test_loader.py
index fe11cd0681..7e369f2c3b 100644
--- a/tests/unit/test_loader.py
+++ b/tests/unit/test_loader.py
@@ -152,12 +152,12 @@ class LazyLoaderUtilsTest(TestCase):
def setUpClass(cls):
cls.opts = salt.config.minion_config(None)
cls.opts['grains'] = salt.loader.grains(cls.opts)
- if not os.path.isdir(TMP):
- os.makedirs(TMP)
+ if not os.path.isdir(RUNTIME_VARS.TMP):
+ os.makedirs(RUNTIME_VARS.TMP)
def setUp(self):
# Setup the module
- self.module_dir = tempfile.mkdtemp(dir=TMP)
+ self.module_dir = tempfile.mkdtemp(dir=RUNTIME_VARS.TMP)
self.module_file = os.path.join(self.module_dir,
'{}.py'.format(self.module_name))
with salt.utils.files.fopen(self.module_file, 'w') as fh:
@@ -165,7 +165,7 @@ class LazyLoaderUtilsTest(TestCase):
fh.flush()
os.fsync(fh.fileno())
- self.utils_dir = tempfile.mkdtemp(dir=TMP)
+ self.utils_dir = tempfile.mkdtemp(dir=RUNTIME_VARS.TMP)
self.utils_file = os.path.join(self.utils_dir,
'{}.py'.format(self.utils_name))
with salt.utils.files.fopen(self.utils_file, 'w') as fh:
--
2.16.4
++++++ fix-for-unless-requisite-when-pip-is-not-installed.patch ++++++
From f8b490c26be8e7f76947cc07f606f95c133805a7 Mon Sep 17 00:00:00 2001
From: "Daniel A. Wozniak" <dwozniak@saltstack.com>
Date: Thu, 20 Feb 2020 21:07:07 +0000
Subject: [PATCH] Fix for unless requisite when pip is not installed
Only remove pip-related modules
---
salt/states/pip_state.py | 17 ++---------------
1 file changed, 2 insertions(+), 15 deletions(-)
diff --git a/salt/states/pip_state.py b/salt/states/pip_state.py
index 0f762752d02bf5ced17928a4c7fd2a3f027b66d5..11e466389fc46574923e2a71d8ca06f2c411f369 100644
--- a/salt/states/pip_state.py
+++ b/salt/states/pip_state.py
@@ -51,7 +51,7 @@ def purge_pip():
return
pip_related_entries = [
(k, v) for (k, v) in sys.modules.items()
- or getattr(v, '__module__', '').startswith('pip.')
+ if getattr(v, '__module__', '').startswith('pip.')
or (isinstance(v, types.ModuleType) and v.__name__.startswith('pip.'))
]
for name, entry in pip_related_entries:
@@ -96,21 +96,8 @@ try:
HAS_PIP = True
except ImportError:
HAS_PIP = False
- # Remove references to the loaded pip module above so reloading works
- import sys
- pip_related_entries = [
- (k, v) for (k, v) in sys.modules.items()
- or getattr(v, '__module__', '').startswith('pip.')
- or (isinstance(v, types.ModuleType) and v.__name__.startswith('pip.'))
- ]
- for name, entry in pip_related_entries:
- sys.modules.pop(name)
- del entry
+ purge_pip()
- del pip
- sys_modules_pip = sys.modules.pop('pip', None)
- if sys_modules_pip is not None:
- del sys_modules_pip
if HAS_PIP is True:
if not hasattr(purge_pip, '__pip_ver__'):
--
2.23.0
++++++ fix-git_pillar-merging-across-multiple-__env__-repos.patch ++++++
From 900d63bc5e85496e16373025457561b405f2329f Mon Sep 17 00:00:00 2001
From: Pablo Suárez Hernández <psuarezhernandez@suse.com>
Date: Tue, 6 Nov 2018 16:38:54 +0000
Subject: [PATCH] Fix git_pillar merging across multiple __env__
repositories (bsc#1112874)
Resolve target branch when using __env__
Test git ext_pillar across multiple repos using __env__
Remove unicode references
---
tests/integration/pillar/test_git_pillar.py | 45 +++++++++++++++++++++++++++++
1 file changed, 45 insertions(+)
diff --git a/tests/integration/pillar/test_git_pillar.py b/tests/integration/pillar/test_git_pillar.py
index 2e549f3948..d417a7ebc3 100644
--- a/tests/integration/pillar/test_git_pillar.py
+++ b/tests/integration/pillar/test_git_pillar.py
@@ -1382,6 +1382,51 @@ class TestPygit2SSH(GitPillarSSHTestBase):
'nested_dict': {'master': True}}}
)
+
+@skipIf(NO_MOCK, NO_MOCK_REASON)
+@skipIf(_windows_or_mac(), 'minion is windows or mac')
+@skip_if_not_root
+@skipIf(not HAS_PYGIT2, 'pygit2 >= {0} and libgit2 >= {1} required'.format(PYGIT2_MINVER, LIBGIT2_MINVER))
+@skipIf(not HAS_NGINX, 'nginx not present')
+@skipIf(not HAS_VIRTUALENV, 'virtualenv not present')
+class TestPygit2HTTP(GitPillarHTTPTestBase):
+ '''
+ Test git_pillar with pygit2 using SSH authentication
+ '''
+ def test_single_source(self):
+ '''
+ Test with git_pillar_includes enabled and using "__env__" as the branch
+ name for the configured repositories.
+ The "gitinfo" repository contains top.sls file with a local reference
+ and also referencing external "nowhere.foo" which is provided by "webinfo"
+ repository mounted as "nowhere".
+ '''
+ ret = self.get_pillar('''\
+ file_ignore_regex: []
+ file_ignore_glob: []
+ git_pillar_provider: pygit2
+ git_pillar_pubkey: {pubkey_nopass}
+ git_pillar_privkey: {privkey_nopass}
+ cachedir: {cachedir}
+ extension_modules: {extmods}
+ ext_pillar:
+ - git:
+ - __env__ {url_extra_repo}:
+ - name: gitinfo
+ - __env__ {url}:
+ - name: webinfo
+ - mountpoint: nowhere
+ ''')
+ self.assertEqual(
+ ret,
+ {'branch': 'master',
+ 'motd': 'The force will be with you. Always.',
+ 'mylist': ['master'],
+ 'mydict': {'master': True,
+ 'nested_list': ['master'],
+ 'nested_dict': {'master': True}}}
+ )
+
@requires_system_grains
def test_root_parameter(self, grains):
'''
--
2.16.4
++++++ fix-ipv6-scope-bsc-1108557.patch ++++++
From 2cb7515f83e2c358b84724e4eb581daa78012fdf Mon Sep 17 00:00:00 2001
From: Bo Maryniuk <bo@suse.de>
Date: Fri, 28 Sep 2018 15:22:33 +0200
Subject: [PATCH] Fix IPv6 scope (bsc#1108557)
Fix ipaddress imports
Remove unused import
Fix ipaddress import
Fix unicode imports in compat
Override standard IPv6Address class
Check version via object
Isolate Py2 and Py3 mode
Add logging
Add debugging to the ip_address method (py2 and py3)
Remove multiple returns and add check for address syntax
Remove unnecessary variable for import detection
Remove duplicated code
Remove unnecessary operator
Remove multiple returns
Use ternary operator instead
Remove duplicated code
Move docstrings to their native places
Add real exception message
Add logging to the ip_interface
Add scope on str
Lintfix: mute not called constructors
Add extra detection for hexadecimal packed bytes on Python2. This cannot be detected with type comparison, because bytes == str and at the same time bytes != str if compatibility is not around
Fix py2 case where the same class cannot initialise itself on Python2 via super.
Simplify checking clause
Do not use introspection for method swap
Fix wrong type swap
Add Py3.4 old implementation's fix
Lintfix
Lintfix refactor: remove duplicate returns as not needed
Revert method remapping with pylint updates
Remove unnecessary manipulation with IPv6 scope outside of the IPv6Address object instance
Lintfix: W0611
Reverse skipping tests: if no ipaddress
---
salt/_compat.py | 74 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 74 insertions(+)
diff --git a/salt/_compat.py b/salt/_compat.py
index e999605d2c..965bb90da3 100644
--- a/salt/_compat.py
+++ b/salt/_compat.py
@@ -230,7 +230,81 @@ class IPv6InterfaceScoped(ipaddress.IPv6Interface, IPv6AddressScoped):
self.hostmask = self.network.hostmask
+def ip_address(address):
+ """Take an IP string/int and return an object of the correct type.
+
+ Args:
+ address: A string or integer, the IP address. Either IPv4 or
+ IPv6 addresses may be supplied; integers less than 2**32 will
+ be considered to be IPv4 by default.
+
+ Returns:
+ An IPv4Address or IPv6Address object.
+
+ Raises:
+ ValueError: if the *address* passed isn't either a v4 or a v6
+ address
+
+ """
+ try:
+ return ipaddress.IPv4Address(address)
+ except (ipaddress.AddressValueError, ipaddress.NetmaskValueError) as err:
+ log.debug('Error while parsing IPv4 address: %s', address)
+ log.debug(err)
+
+ try:
+ return IPv6AddressScoped(address)
+ except (ipaddress.AddressValueError, ipaddress.NetmaskValueError) as err:
+ log.debug('Error while parsing IPv6 address: %s', address)
+ log.debug(err)
+
+ if isinstance(address, bytes):
+ raise ipaddress.AddressValueError('{} does not appear to be an IPv4 or IPv6 address. '
+ 'Did you pass in a bytes (str in Python 2) instead '
+ 'of a unicode object?'.format(repr(address)))
+
+ raise ValueError('{} does not appear to be an IPv4 or IPv6 address'.format(repr(address)))
+
+
+def ip_interface(address):
+ """Take an IP string/int and return an object of the correct type.
+
+ Args:
+ address: A string or integer, the IP address. Either IPv4 or
+ IPv6 addresses may be supplied; integers less than 2**32 will
+ be considered to be IPv4 by default.
+
+ Returns:
+ An IPv4Interface or IPv6Interface object.
+
+ Raises:
+ ValueError: if the string passed isn't either a v4 or a v6
+ address.
+
+ Notes:
+ The IPv?Interface classes describe an Address on a particular
+ Network, so they're basically a combination of both the Address
+ and Network classes.
+
+ """
+ try:
+ return ipaddress.IPv4Interface(address)
+ except (ipaddress.AddressValueError, ipaddress.NetmaskValueError) as err:
+ log.debug('Error while getting IPv4 interface for address %s', address)
+ log.debug(err)
+
+ try:
+ return ipaddress.IPv6Interface(address)
+ except (ipaddress.AddressValueError, ipaddress.NetmaskValueError) as err:
+ log.debug('Error while getting IPv6 interface for address %s', address)
+ log.debug(err)
+
+ raise ValueError('{} does not appear to be an IPv4 or IPv6 interface'.format(address))
+
+
if ipaddress:
ipaddress.IPv6Address = IPv6AddressScoped
if sys.version_info.major == 2:
ipaddress.IPv6Interface = IPv6InterfaceScoped
+ ipaddress.ip_address = ip_address
+ ipaddress.ip_interface = ip_interface
--
2.16.4
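The control flow of the ip_address() helper added above can be exercised with
the stock library alone; a minimal sketch, assuming only the standard
ipaddress module rather than Salt's scoped IPv6 classes:

    import ipaddress

    def ip_address(address):
        # Try IPv4 first, then IPv6, mirroring the order in the patch.
        for cls in (ipaddress.IPv4Address, ipaddress.IPv6Address):
            try:
                return cls(address)
            except (ipaddress.AddressValueError, ipaddress.NetmaskValueError):
                continue
        raise ValueError('{0!r} does not appear to be an IPv4 or IPv6 address'.format(address))

    print(ip_address('127.0.0.1'))  # IPv4Address('127.0.0.1')
    print(ip_address('::1'))        # IPv6Address('::1')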
++++++ fix-issue-2068-test.patch ++++++
From bfdd7f946d56d799e89b33f7e3b72426732b0195 Mon Sep 17 00:00:00 2001
From: Bo Maryniuk <bo@suse.de>
Date: Wed, 9 Jan 2019 16:08:19 +0100
Subject: [PATCH] Fix issue #2068 test
Skip injecting `__call__` if chunk is not dict.
This also fixes `integration/modules/test_state.py:StateModuleTest.test_exclude`, which tests `include` and `exclude` state directives that contain only a list of strings.
Minor update: more correct is-dict check.
---
salt/state.py | 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/salt/state.py b/salt/state.py
index bc5277554e..2fa5f64ca5 100644
--- a/salt/state.py
+++ b/salt/state.py
@@ -25,6 +25,7 @@ import traceback
import re
import time
import random
+import collections
# Import salt libs
import salt.loader
@@ -2896,16 +2897,18 @@ class State(object):
'''
for chunk in high:
state = high[chunk]
+ if not isinstance(state, collections.Mapping):
+ continue
for state_ref in state:
needs_default = True
+ if not isinstance(state[state_ref], list):
+ continue
for argset in state[state_ref]:
if isinstance(argset, six.string_types):
needs_default = False
break
if needs_default:
- order = state[state_ref].pop(-1)
- state[state_ref].append('__call__')
- state[state_ref].append(order)
+ state[state_ref].insert(-1, '__call__')
def call_high(self, high, orchestration_jid=None):
'''
--
2.16.4
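The guard added above reduces to two isinstance checks before the high data is
mutated; schematically, with plain dicts and lists standing in for compiled
high data (illustrative only, modern collections.abc spelling):

    from collections.abc import Mapping

    high = {
        '/etc/motd': {'file': ['managed', {'order': 1}]},
        'include': ['other.sls'],           # non-dict chunk: must be skipped
    }

    for chunk in high:
        state = high[chunk]
        if not isinstance(state, Mapping):
            continue                        # e.g. include/exclude lists
        for state_ref in state:
            if not isinstance(state[state_ref], list):
                continue
            # inject '__call__' only when no string argset marks a default
            if not any(isinstance(arg, str) for arg in state[state_ref]):
                state[state_ref].insert(-1, '__call__')

    print(high)   # 'managed' counts as a default, so nothing is injected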
++++++ fix-memory-leak-produced-by-batch-async-find_jobs-me.patch ++++++
From 77d53d9567b7aec045a8fffd29afcb76a8405caf Mon Sep 17 00:00:00 2001
From: Mihai Dinca <mdinca@suse.de>
Date: Mon, 16 Sep 2019 11:27:30 +0200
Subject: [PATCH] Fix memory leak produced by batch async find_jobs
mechanism (bsc#1140912)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Multiple fixes:
- use different JIDs per find_job
- fix bug in detection of find_job returns
- fix timeout passed from request payload
- better cleanup at the end of batching
Co-authored-by: Pablo Suárez Hernández <psuarezhernandez@suse.com>
---
salt/cli/batch_async.py | 60 ++++++++++++++++++++++++++++++++-----------------
salt/client/__init__.py | 1 +
salt/master.py | 1 -
3 files changed, 41 insertions(+), 21 deletions(-)
diff --git a/salt/cli/batch_async.py b/salt/cli/batch_async.py
index 8c8f481e34..8a67331102 100644
--- a/salt/cli/batch_async.py
+++ b/salt/cli/batch_async.py
@@ -72,6 +72,7 @@ class BatchAsync(object):
self.done_minions = set()
self.active = set()
self.initialized = False
+ self.jid_gen = jid_gen
self.ping_jid = jid_gen()
self.batch_jid = jid_gen()
self.find_job_jid = jid_gen()
@@ -89,14 +90,11 @@ class BatchAsync(object):
def __set_event_handler(self):
ping_return_pattern = 'salt/job/{0}/ret/*'.format(self.ping_jid)
batch_return_pattern = 'salt/job/{0}/ret/*'.format(self.batch_jid)
- find_job_return_pattern = 'salt/job/{0}/ret/*'.format(self.find_job_jid)
self.event.subscribe(ping_return_pattern, match_type='glob')
self.event.subscribe(batch_return_pattern, match_type='glob')
- self.event.subscribe(find_job_return_pattern, match_type='glob')
- self.event.patterns = {
+ self.patterns = {
(ping_return_pattern, 'ping_return'),
(batch_return_pattern, 'batch_run'),
- (find_job_return_pattern, 'find_job_return')
}
self.event.set_event_handler(self.__event_handler)
@@ -104,7 +102,7 @@ class BatchAsync(object):
if not self.event:
return
mtag, data = self.event.unpack(raw, self.event.serial)
- for (pattern, op) in self.event.patterns:
+ for (pattern, op) in self.patterns:
if fnmatch.fnmatch(mtag, pattern):
minion = data['id']
if op == 'ping_return':
@@ -112,7 +110,8 @@ class BatchAsync(object):
if self.targeted_minions == self.minions:
self.event.io_loop.spawn_callback(self.start_batch)
elif op == 'find_job_return':
- self.find_job_returned.add(minion)
+ if data.get("return", None):
+ self.find_job_returned.add(minion)
elif op == 'batch_run':
if minion in self.active:
self.active.remove(minion)
@@ -131,31 +130,46 @@ class BatchAsync(object):
return set(list(to_run)[:next_batch_size])
@tornado.gen.coroutine
- def check_find_job(self, batch_minions):
+ def check_find_job(self, batch_minions, jid):
+ find_job_return_pattern = 'salt/job/{0}/ret/*'.format(jid)
+ self.event.unsubscribe(find_job_return_pattern, match_type='glob')
+ self.patterns.remove((find_job_return_pattern, "find_job_return"))
+
timedout_minions = batch_minions.difference(self.find_job_returned).difference(self.done_minions)
self.timedout_minions = self.timedout_minions.union(timedout_minions)
self.active = self.active.difference(self.timedout_minions)
running = batch_minions.difference(self.done_minions).difference(self.timedout_minions)
+
if timedout_minions:
self.schedule_next()
+
if running:
+ self.find_job_returned = self.find_job_returned.difference(running)
self.event.io_loop.add_callback(self.find_job, running)
@tornado.gen.coroutine
def find_job(self, minions):
- not_done = minions.difference(self.done_minions)
- ping_return = yield self.local.run_job_async(
- not_done,
- 'saltutil.find_job',
- [self.batch_jid],
- 'list',
- gather_job_timeout=self.opts['gather_job_timeout'],
- jid=self.find_job_jid,
- **self.eauth)
- self.event.io_loop.call_later(
- self.opts['gather_job_timeout'],
- self.check_find_job,
- not_done)
+ not_done = minions.difference(self.done_minions).difference(self.timedout_minions)
+
+ if not_done:
+ jid = self.jid_gen()
+ find_job_return_pattern = 'salt/job/{0}/ret/*'.format(jid)
+ self.patterns.add((find_job_return_pattern, "find_job_return"))
+ self.event.subscribe(find_job_return_pattern, match_type='glob')
+
+ ret = yield self.local.run_job_async(
+ not_done,
+ 'saltutil.find_job',
+ [self.batch_jid],
+ 'list',
+ gather_job_timeout=self.opts['gather_job_timeout'],
+ jid=jid,
+ **self.eauth)
+ self.event.io_loop.call_later(
+ self.opts['gather_job_timeout'],
+ self.check_find_job,
+ not_done,
+ jid)
@tornado.gen.coroutine
def start(self):
@@ -203,6 +217,9 @@ class BatchAsync(object):
}
self.event.fire_event(data, "salt/batch/{0}/done".format(self.batch_jid))
self.event.remove_event_handler(self.__event_handler)
+ for (pattern, label) in self.patterns:
+ if label in ["ping_return", "batch_run"]:
+ self.event.unsubscribe(pattern, match_type='glob')
def schedule_next(self):
if not self.scheduled:
@@ -226,9 +243,12 @@ class BatchAsync(object):
gather_job_timeout=self.opts['gather_job_timeout'],
jid=self.batch_jid,
metadata=self.metadata)
+
self.event.io_loop.call_later(self.opts['timeout'], self.find_job, set(next_batch))
except Exception as ex:
+ log.error("Error in scheduling next batch: %s", ex)
self.active = self.active.difference(next_batch)
else:
self.end_batch()
self.scheduled = False
+ yield
diff --git a/salt/client/__init__.py b/salt/client/__init__.py
index 3bbc7f9de7..a48d79ef8d 100644
--- a/salt/client/__init__.py
+++ b/salt/client/__init__.py
@@ -1622,6 +1622,7 @@ class LocalClient(object):
'key': self.key,
'tgt_type': tgt_type,
'ret': ret,
+ 'timeout': timeout,
'jid': jid}
# if kwargs are passed, pack them.
diff --git a/salt/master.py b/salt/master.py
index 5e2277ba76..3abf7ae60b 100644
--- a/salt/master.py
+++ b/salt/master.py
@@ -2044,7 +2044,6 @@ class ClearFuncs(object):
def publish_batch(self, clear_load, minions, missing):
batch_load = {}
batch_load.update(clear_load)
- import salt.cli.batch_async
batch = salt.cli.batch_async.BatchAsync(
self.local.opts,
functools.partial(self._prep_jid, clear_load, {}),
--
2.16.4
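The essence of the leak fix above: every find_job poll now gets its own JID,
its own event pattern, and a matching unsubscribe once that poll resolves, so
subscriptions no longer accumulate. A minimal sketch with a stub event bus
(all names illustrative, not Salt's event API):

    class StubEvent(object):
        # Stands in for Salt's event bus; tracks live subscriptions only.
        def __init__(self):
            self.subscribed = set()
        def subscribe(self, pattern):
            self.subscribed.add(pattern)
        def unsubscribe(self, pattern):
            self.subscribed.discard(pattern)

    event, patterns = StubEvent(), set()

    def find_job(jid):
        pattern = 'salt/job/{0}/ret/*'.format(jid)
        patterns.add((pattern, 'find_job_return'))
        event.subscribe(pattern)

    def check_find_job(jid):
        pattern = 'salt/job/{0}/ret/*'.format(jid)
        event.unsubscribe(pattern)
        patterns.remove((pattern, 'find_job_return'))

    find_job('1234')
    check_find_job('1234')
    assert not event.subscribed and not patterns   # nothing left behind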
++++++ fix-regression-in-service-states-with-reload-argumen.patch ++++++
From 1a3e69af7c69a4893642dd1e9a9c4d3eb99cf874 Mon Sep 17 00:00:00 2001
From: Erik Johnson <erik.johnson@level3.com>
Date: Mon, 17 Feb 2020 18:43:06 -0600
Subject: [PATCH] Fix regression in service states with reload argument
Add functional test
Fix failing test
Add __opts__ and __grains__ just in case
Skip on OSX for now
---
salt/states/service.py | 2 +-
tests/unit/states/test_service.py | 79 +++++++++++++++++++++++++++++--
2 files changed, 75 insertions(+), 6 deletions(-)
diff --git a/salt/states/service.py b/salt/states/service.py
index 987e37cd421713313c41db4459b85019c041d549..89afa0dfa625e9ee3d9ecd7566232452d79ca99c 100644
--- a/salt/states/service.py
+++ b/salt/states/service.py
@@ -488,7 +488,7 @@ def running(name,
time.sleep(init_delay)
# only force a change state if we have explicitly detected them
- after_toggle_status = __salt__['service.status'](name, sig, **kwargs)
+ after_toggle_status = __salt__['service.status'](name, sig, **status_kwargs)
if 'service.enabled' in __salt__:
after_toggle_enable_status = __salt__['service.enabled'](name)
else:
diff --git a/tests/unit/states/test_service.py b/tests/unit/states/test_service.py
index 30c716025495f537efddf69bf6df8c68bc938e2e..3eead4c3576eefdd8d96eec4cc113edf194ebbc6 100644
--- a/tests/unit/states/test_service.py
+++ b/tests/unit/states/test_service.py
@@ -7,14 +7,15 @@
from __future__ import absolute_import, print_function, unicode_literals
# Import Salt Testing Libs
+from tests.support.helpers import destructiveTest
from tests.support.mixins import LoaderModuleMockMixin
-from tests.support.unit import TestCase
-from tests.support.mock import (
- MagicMock,
- patch,
-)
+from tests.support.unit import TestCase, skipIf
+from tests.support.mock import MagicMock, patch
# Import Salt Libs
+import salt.utils.platform
+import salt.config
+import salt.loader
import salt.states.service as service
@@ -251,3 +252,71 @@ class ServiceTestCase(TestCase, LoaderModuleMockMixin):
ret[3])
self.assertDictEqual(service.mod_watch("salt", "stack"), ret[1])
+
+
+@destructiveTest
+@skipIf(salt.utils.platform.is_darwin(), "service.running is currently failing on OSX")
+class ServiceTestCaseFunctional(TestCase, LoaderModuleMockMixin):
+ '''
+ Validate the service state
+ '''
+ def setup_loader_modules(self):
+ self.opts = salt.config.DEFAULT_MINION_OPTS.copy()
+ self.opts['grains'] = salt.loader.grains(self.opts)
+ self.utils = salt.loader.utils(self.opts)
+ self.modules = salt.loader.minion_mods(self.opts, utils=self.utils)
+
+ self.service_name = 'cron'
+ cmd_name = 'crontab'
+ os_family = self.opts['grains']['os_family']
+ os_release = self.opts['grains']['osrelease']
+ if os_family == 'RedHat':
+ self.service_name = 'crond'
+ elif os_family == 'Arch':
+ self.service_name = 'sshd'
+ cmd_name = 'systemctl'
+ elif os_family == 'MacOS':
+ self.service_name = 'org.ntp.ntpd'
+ if int(os_release.split('.')[1]) >= 13:
+ self.service_name = 'com.openssh.sshd'
+ elif os_family == 'Windows':
+ self.service_name = 'Spooler'
+
+ if os_family != 'Windows' and salt.utils.path.which(cmd_name) is None:
+ self.skipTest('{0} is not installed'.format(cmd_name))
+
+ return {
+ service: {
+ '__grains__': self.opts['grains'],
+ '__opts__': self.opts,
+ '__salt__': self.modules,
+ '__utils__': self.utils,
+ },
+ }
+
+ def setUp(self):
+ self.pre_srv_enabled = True if self.service_name in self.modules['service.get_enabled']() else False
+ self.post_srv_disable = False
+ if not self.pre_srv_enabled:
+ self.modules['service.enable'](self.service_name)
+ self.post_srv_disable = True
+
+ def tearDown(self):
+ if self.post_srv_disable:
+ self.modules['service.disable'](self.service_name)
+
+ def test_running_with_reload(self):
+ with patch.dict(service.__opts__, {'test': False}):
+ service.dead(self.service_name, enable=False)
+ result = service.running(name=self.service_name, enable=True, reload=False)
+
+ expected = {
+ 'changes': {
+ self.service_name: True
+ },
+ 'comment': 'Service {0} has been enabled, and is '
+ 'running'.format(self.service_name),
+ 'name': self.service_name,
+ 'result': True
+ }
+ self.assertDictEqual(result, expected)
--
2.23.0
++++++ fix-type-error-in-tornadoimporter.patch ++++++
From adf61956bbebeee3a64a6bfec81206bb2663ba13 Mon Sep 17 00:00:00 2001
From: "Daniel A. Wozniak" <dwozniak@saltstack.com>
Date: Sat, 8 Feb 2020 02:08:04 +0000
Subject: [PATCH] Fix type error in TornadoImporter
---
salt/__init__.py | 2 +-
tests/unit/test_ext.py | 8 ++++++++
2 files changed, 9 insertions(+), 1 deletion(-)
diff --git a/salt/__init__.py b/salt/__init__.py
index 3e99d2439a..117523b1d2 100644
--- a/salt/__init__.py
+++ b/salt/__init__.py
@@ -12,7 +12,7 @@ import importlib
class TornadoImporter(object):
- def find_module(self, module_name, package_path):
+ def find_module(self, module_name, package_path=None):
if module_name.startswith('tornado'):
return self
return None
diff --git a/tests/unit/test_ext.py b/tests/unit/test_ext.py
index 1cd8572086..3080147d9e 100644
--- a/tests/unit/test_ext.py
+++ b/tests/unit/test_ext.py
@@ -14,6 +14,7 @@ from tests.support.runtests import RUNTIME_VARS
import tests.support.helpers
# Import Salt libs
+import salt
import salt.ext.six
import salt.modules.cmdmod
import salt.utils.platform
@@ -95,3 +96,10 @@ class VendorTornadoTest(TestCase):
log.error("Test found bad line: %s", line)
valid_lines.append(line)
assert valid_lines == [], len(valid_lines)
+
+ def test_regression_56063(self):
+ importer = salt.TornadoImporter()
+ try:
+ importer.find_module('tornado')
+ except TypeError:
+ assert False, 'TornadoImporter raised type error when one argument passed'
--
2.27.0
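The regression fixed above is just a signature mismatch: Python may call a
meta-path importer's find_module() with a single argument, so package_path
needs a default. The corrected shape, runnable on its own:

    class TornadoImporter(object):
        def find_module(self, module_name, package_path=None):
            # Accept both call shapes; only claim tornado imports.
            if module_name.startswith('tornado'):
                return self
            return None

    importer = TornadoImporter()
    print(importer.find_module('tornado'))     # works with one argument
    print(importer.find_module('os', '/tmp'))  # and with two (returns None)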
++++++ fix-typo-on-msgpack-version-when-sanitizing-msgpack-.patch ++++++
From 5a2c7671be0fcdf03050049ac4a1bbf4929abf1e Mon Sep 17 00:00:00 2001
From: Pablo Suárez Hernández <psuarezhernandez@suse.com>
Date: Fri, 27 Mar 2020 15:58:40 +0000
Subject: [PATCH] Fix typo on msgpack version when sanitizing msgpack
kwargs (bsc#1167437)
---
salt/utils/msgpack.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/salt/utils/msgpack.py b/salt/utils/msgpack.py
index 1d02aa96ba8b659eb4038f00563c9cfc31a568e5..4b5a256513a524a33d7d42773644567a0970a46b 100644
--- a/salt/utils/msgpack.py
+++ b/salt/utils/msgpack.py
@@ -61,7 +61,7 @@ def _sanitize_msgpack_kwargs(kwargs):
assert isinstance(kwargs, dict)
if version < (0, 6, 0) and kwargs.pop('strict_map_key', None) is not None:
log.info('removing unsupported `strict_map_key` argument from msgpack call')
- if version < (0, 5, 5) and kwargs.pop('raw', None) is not None:
+ if version < (0, 5, 2) and kwargs.pop('raw', None) is not None:
log.info('removing unsupported `raw` argument from msgpack call')
if version < (0, 4, 0) and kwargs.pop('use_bin_type', None) is not None:
log.info('removing unsupported `use_bin_type` argument from msgpack call')
--
2.23.0
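In isolation, the sanitizer drops keyword arguments that the installed msgpack
predates, keyed on its version tuple; a minimal sketch with a hard-coded
version (illustrative only; the real code reads msgpack's own version):

    version = (0, 5, 1)   # pretend installed msgpack version

    def sanitize_msgpack_kwargs(kwargs):
        # Per the corrected check above, `raw` is only safe from 0.5.2 on.
        if version < (0, 5, 2) and kwargs.pop('raw', None) is not None:
            print('removing unsupported `raw` argument from msgpack call')
        return kwargs

    print(sanitize_msgpack_kwargs({'raw': False, 'use_list': True}))
    # -> {'use_list': True}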
++++++ fix-unit-test-for-grains-core.patch ++++++
From 6bb7b6c4a530abb7e831449545a35ee5ede49dcb Mon Sep 17 00:00:00 2001
From: Bo Maryniuk <bo@suse.de>
Date: Thu, 11 Oct 2018 16:20:40 +0200
Subject: [PATCH] Fix unit test for grains core
---
tests/unit/grains/test_core.py | 11 +++++------
1 file changed, 5 insertions(+), 6 deletions(-)
diff --git a/tests/unit/grains/test_core.py b/tests/unit/grains/test_core.py
index b31f5dcddd..c40595eb3f 100644
--- a/tests/unit/grains/test_core.py
+++ b/tests/unit/grains/test_core.py
@@ -68,11 +68,10 @@ class CoreGrainsTestCase(TestCase, LoaderModuleMockMixin):
def test_parse_etc_os_release(self, path_isfile_mock):
path_isfile_mock.side_effect = lambda x: x == "/usr/lib/os-release"
with salt.utils.files.fopen(os.path.join(OS_RELEASE_DIR, "ubuntu-17.10")) as os_release_file:
- os_release_content = os_release_file.read()
- with patch("salt.utils.files.fopen", mock_open(read_data=os_release_content)):
- os_release = core._parse_os_release(
- '/etc/os-release',
- '/usr/lib/os-release')
+ os_release_content = os_release_file.readlines()
+ with patch("salt.utils.files.fopen", mock_open()) as os_release_file:
+ os_release_file.return_value.__iter__.return_value = os_release_content
+ os_release = core._parse_os_release(["/etc/os-release", "/usr/lib/os-release"])
self.assertEqual(os_release, {
"NAME": "Ubuntu",
"VERSION": "17.10 (Artful Aardvark)",
@@ -134,7 +133,7 @@ class CoreGrainsTestCase(TestCase, LoaderModuleMockMixin):
def test_missing_os_release(self):
with patch('salt.utils.files.fopen', mock_open(read_data={})):
- os_release = core._parse_os_release('/etc/os-release', '/usr/lib/os-release')
+ os_release = core._parse_os_release(['/etc/os-release', '/usr/lib/os-release'])
self.assertEqual(os_release, {})
@skipIf(not salt.utils.platform.is_windows(), 'System is not Windows')
--
2.16.4
++++++ fix-unit-tests-for-batch-async-after-refactor.patch ++++++
From e9f2af1256a52d58a7c8e6dd0122eb6d5cc47dd3 Mon Sep 17 00:00:00 2001
From: Pablo Suárez Hernández <psuarezhernandez@suse.com>
Date: Wed, 4 Mar 2020 10:13:43 +0000
Subject: [PATCH] Fix unit tests for batch async after refactor
---
tests/unit/cli/test_batch_async.py | 18 +++++++++++++++++-
1 file changed, 17 insertions(+), 1 deletion(-)
diff --git a/tests/unit/cli/test_batch_async.py b/tests/unit/cli/test_batch_async.py
index f1d36a81fb..e1ce60859b 100644
--- a/tests/unit/cli/test_batch_async.py
+++ b/tests/unit/cli/test_batch_async.py
@@ -126,9 +126,10 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase):
self.batch.timedout_minions = {'bar'}
self.batch.event = MagicMock()
self.batch.metadata = {'mykey': 'myvalue'}
+ old_event = self.batch.event
self.batch.end_batch()
self.assertEqual(
- self.batch.event.fire_event.call_args[0],
+ old_event.fire_event.call_args[0],
(
{
'available_minions': set(['foo', 'bar']),
@@ -146,6 +147,21 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase):
event = MagicMock()
batch.event = event
batch.__del__()
+ self.assertEqual(batch.local, None)
+ self.assertEqual(batch.event, None)
+ self.assertEqual(batch.ioloop, None)
+
+ def test_batch_close_safe(self):
+ batch = BatchAsync(MagicMock(), MagicMock(), MagicMock())
+ event = MagicMock()
+ batch.event = event
+ batch.patterns = { ('salt/job/1234/ret/*', 'find_job_return'), ('salt/job/4321/ret/*', 'find_job_return') }
+ batch.close_safe()
+ self.assertEqual(batch.local, None)
+ self.assertEqual(batch.event, None)
+ self.assertEqual(batch.ioloop, None)
+ self.assertEqual(
+ len(event.unsubscribe.mock_calls), 2)
self.assertEqual(
len(event.remove_event_handler.mock_calls), 1)
--
2.23.0
++++++ fix-wrong-test_mod_del_repo_multiline_values-test-af.patch ++++++
From a8f0a15e4067ec278c8a2d690e3bf815523286ca Mon Sep 17 00:00:00 2001
From: Pablo Suárez Hernández <psuarezhernandez@suse.com>
Date: Thu, 12 Mar 2020 13:26:51 +0000
Subject: [PATCH] Fix wrong test_mod_del_repo_multiline_values test after
rebase
---
tests/integration/modules/test_pkg.py | 56 +++------------------------
1 file changed, 6 insertions(+), 50 deletions(-)
diff --git a/tests/integration/modules/test_pkg.py b/tests/integration/modules/test_pkg.py
index 6f3767bfbd272848277b877d1fe640caf8f349f6..0f4c5c9d459c56bb485408f943c1dee49c46cd21 100644
--- a/tests/integration/modules/test_pkg.py
+++ b/tests/integration/modules/test_pkg.py
@@ -134,6 +134,10 @@ class PkgModuleTest(ModuleCase, SaltReturnAssertsMixin):
if repo is not None:
self.run_function('pkg.del_repo', [repo])
+ @destructiveTest
+ @requires_salt_modules('pkg.mod_repo', 'pkg.del_repo', 'pkg.get_repo')
+ @requires_network()
+ @requires_system_grains
def test_mod_del_repo_multiline_values(self):
'''
test modifying and deleting a software repository defined with multiline values
@@ -141,8 +145,9 @@ class PkgModuleTest(ModuleCase, SaltReturnAssertsMixin):
os_grain = self.run_function('grains.item', ['os'])['os']
repo = None
try:
- if os_grain in ['CentOS', 'RedHat']:
+ if os_grain in ['CentOS', 'RedHat', 'SUSE']:
my_baseurl = 'http://my.fake.repo/foo/bar/\n http://my.fake.repo.alt/foo/bar/'
+ expected_get_repo_baseurl_zypp = 'http://my.fake.repo/foo/bar/%0A%20http://my.fake.repo.alt/foo/bar/'
expected_get_repo_baseurl = 'http://my.fake.repo/foo/bar/\nhttp://my.fake.repo.alt/foo/bar/'
major_release = int(
self.run_function(
@@ -189,55 +194,6 @@ class PkgModuleTest(ModuleCase, SaltReturnAssertsMixin):
if repo is not None:
self.run_function('pkg.del_repo', [repo])
- def test_mod_del_repo_multiline_values(self):
- '''
- test modifying and deleting a software repository defined with multiline values
- '''
- os_grain = self.run_function('grains.item', ['os'])['os']
- repo = None
- try:
- if os_grain in ['CentOS', 'RedHat', 'SUSE']:
- my_baseurl = 'http://my.fake.repo/foo/bar/\n http://my.fake.repo.alt/foo/bar/'
- expected_get_repo_baseurl_zypp = 'http://my.fake.repo/foo/bar/%0A%20http://my.fake.repo.alt/foo/bar/'
- expected_get_repo_baseurl = 'http://my.fake.repo/foo/bar/\nhttp://my.fake.repo.alt/foo/bar/'
- major_release = int(
- self.run_function(
- 'grains.item',
- ['osmajorrelease']
- )['osmajorrelease']
- )
- repo = 'fakerepo'
- name = 'Fake repo for RHEL/CentOS/SUSE'
- baseurl = my_baseurl
- gpgkey = 'https://my.fake.repo/foo/bar/MY-GPG-KEY.pub'
- failovermethod = 'priority'
- gpgcheck = 1
- enabled = 1
- ret = self.run_function(
- 'pkg.mod_repo',
- [repo],
- name=name,
- baseurl=baseurl,
- gpgkey=gpgkey,
- gpgcheck=gpgcheck,
- enabled=enabled,
- failovermethod=failovermethod,
- )
- # return data from pkg.mod_repo contains the file modified at
- # the top level, so use next(iter(ret)) to get that key
- self.assertNotEqual(ret, {})
- repo_info = ret[next(iter(ret))]
- self.assertIn(repo, repo_info)
- self.assertEqual(repo_info[repo]['baseurl'], my_baseurl)
- ret = self.run_function('pkg.get_repo', [repo])
- self.assertEqual(ret['baseurl'], expected_get_repo_baseurl)
- self.run_function('pkg.mod_repo', [repo])
- ret = self.run_function('pkg.get_repo', [repo])
- self.assertEqual(ret['baseurl'], expected_get_repo_baseurl)
- finally:
- if repo is not None:
- self.run_function('pkg.del_repo', [repo])
-
@requires_salt_modules('pkg.owner')
def test_owner(self):
'''
--
2.23.0
++++++ fix-zypper-pkg.list_pkgs-expectation-and-dpkg-mockin.patch ++++++
From eb51734ad93b1fa0c6bc8fde861fdabfe3e0d6b0 Mon Sep 17 00:00:00 2001
From: Mihai Dinca <mdinca@suse.de>
Date: Thu, 13 Jun 2019 17:48:55 +0200
Subject: [PATCH] Fix zypper pkg.list_pkgs expectation and dpkg mocking
---
tests/unit/modules/test_dpkg_lowpkg.py | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/tests/unit/modules/test_dpkg_lowpkg.py b/tests/unit/modules/test_dpkg_lowpkg.py
index a0b3346f9d..bc564f080a 100644
--- a/tests/unit/modules/test_dpkg_lowpkg.py
+++ b/tests/unit/modules/test_dpkg_lowpkg.py
@@ -125,9 +125,9 @@ class DpkgTestCase(TestCase, LoaderModuleMockMixin):
with patch.dict(dpkg.__salt__, {'cmd.run_all': mock}):
self.assertEqual(dpkg.file_dict('httpd'), 'Error: error')
- @patch('salt.modules.dpkg._get_pkg_ds_avail', MagicMock(return_value=dselect_pkg))
- @patch('salt.modules.dpkg._get_pkg_info', MagicMock(return_value=pkgs_info))
- @patch('salt.modules.dpkg._get_pkg_license', MagicMock(return_value='BSD v3'))
+ @patch('salt.modules.dpkg_lowpkg._get_pkg_ds_avail', MagicMock(return_value=dselect_pkg))
+ @patch('salt.modules.dpkg_lowpkg._get_pkg_info', MagicMock(return_value=pkgs_info))
+ @patch('salt.modules.dpkg_lowpkg._get_pkg_license', MagicMock(return_value='BSD v3'))
def test_info(self):
'''
Test info
@@ -152,9 +152,9 @@ class DpkgTestCase(TestCase, LoaderModuleMockMixin):
assert pkg_data['maintainer'] == 'Simpsons Developers <simpsons-devel-discuss(a)lists.springfield.org>'
assert pkg_data['license'] == 'BSD v3'
- @patch('salt.modules.dpkg._get_pkg_ds_avail', MagicMock(return_value=dselect_pkg))
- @patch('salt.modules.dpkg._get_pkg_info', MagicMock(return_value=pkgs_info))
- @patch('salt.modules.dpkg._get_pkg_license', MagicMock(return_value='BSD v3'))
+ @patch('salt.modules.dpkg_lowpkg._get_pkg_ds_avail', MagicMock(return_value=dselect_pkg))
+ @patch('salt.modules.dpkg_lowpkg._get_pkg_info', MagicMock(return_value=pkgs_info))
+ @patch('salt.modules.dpkg_lowpkg._get_pkg_license', MagicMock(return_value='BSD v3'))
def test_info_attr(self):
'''
Test info with 'attr' parameter
--
2.16.4
++++++ fix-zypper.list_pkgs-to-be-aligned-with-pkg-state.patch ++++++
From 0612549b3acfeb15e0b499b6f469d64062d6ae2d Mon Sep 17 00:00:00 2001
From: Pablo Suárez Hernández <psuarezhernandez@suse.com>
Date: Mon, 25 Jun 2018 13:06:40 +0100
Subject: [PATCH] Fix zypper.list_pkgs to be aligned with pkg state
Handle packages with multiple versions properly with zypper
Add unit test coverage for multiple version packages on Zypper
Fix '_find_remove_targets' after aligning Zypper with pkg state
---
salt/states/pkg.py | 21 ---------------------
1 file changed, 21 deletions(-)
diff --git a/salt/states/pkg.py b/salt/states/pkg.py
index c0fa2f6b69..a13d418400 100644
--- a/salt/states/pkg.py
+++ b/salt/states/pkg.py
@@ -450,16 +450,6 @@ def _find_remove_targets(name=None,
if __grains__['os'] == 'FreeBSD' and origin:
cver = [k for k, v in six.iteritems(cur_pkgs) if v['origin'] == pkgname]
- elif __grains__['os_family'] == 'Suse':
- # On SUSE systems. Zypper returns packages without "arch" in name
- try:
- namepart, archpart = pkgname.rsplit('.', 1)
- except ValueError:
- cver = cur_pkgs.get(pkgname, [])
- else:
- if archpart in salt.utils.pkg.rpm.ARCHES + ("noarch",):
- pkgname = namepart
- cver = cur_pkgs.get(pkgname, [])
else:
cver = cur_pkgs.get(pkgname, [])
@@ -866,17 +856,6 @@ def _verify_install(desired, new_pkgs, ignore_epoch=False, new_caps=None):
cver = new_pkgs.get(pkgname.split('%')[0])
elif __grains__['os_family'] == 'Debian':
cver = new_pkgs.get(pkgname.split('=')[0])
- elif __grains__['os_family'] == 'Suse':
- # On SUSE systems. Zypper returns packages without "arch" in name
- try:
- namepart, archpart = pkgname.rsplit('.', 1)
- except ValueError:
- cver = new_pkgs.get(pkgname)
- else:
- if archpart in salt.utils.pkg.rpm.ARCHES + ("noarch",):
- cver = new_pkgs.get(namepart)
- else:
- cver = new_pkgs.get(pkgname)
else:
cver = new_pkgs.get(pkgname)
if not cver and pkgname in new_caps:
--
2.16.4
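For context, the deleted Suse branches implemented roughly the normalization sketched below; once zypper stopped reporting packages under "name.arch" keys, the split became unnecessary. A sketch with a shortened, hard-coded arch tuple standing in for salt.utils.pkg.rpm.ARCHES:

    ARCHES = ("x86_64", "i586", "i686", "aarch64", "ppc64le", "s390x")

    def strip_arch(pkgname):
        # 'vim.x86_64' -> 'vim', but 'my.package' stays untouched because
        # 'package' is not a known architecture suffix.
        namepart, sep, archpart = pkgname.rpartition(".")
        if sep and archpart in ARCHES + ("noarch",):
            return namepart
        return pkgname

    assert strip_arch("vim.x86_64") == "vim"
    assert strip_arch("my.package") == "my.package"
    assert strip_arch("vim") == "vim"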
++++++ fixed-bug-lvm-has-no-parttion-type.-the-scipt-later-.patch ++++++
From 40a033cecf44d319039337eb4e4d2f14febe400b Mon Sep 17 00:00:00 2001
From: tyl0re <andreas@vogler.name>
Date: Wed, 17 Jul 2019 10:13:09 +0200
Subject: [PATCH] Fixed bug: LVM has no partition type. Later in the script
the code checks fs_type and builds either cmd = ('parted', '-m', '-s', '--',
device, 'mkpart', part_type, fs_type, start, end) or, without fs_type,
cmd = ('parted', '-m', '-s', '--', device, 'mkpart', part_type, start, end),
but that second branch was never reached. Earlier versions of parted.py
(around line 443) guarded this with: if fs_type and fs_type not in
set(['ext2', 'fat32', 'fat16', 'linux-swap', 'reiserfs', 'hfs', 'hfs+',
'hfsx', 'NTFS', 'ufs', 'xfs', 'zfs']):
So the check for an undefined fs_type was missing.
---
salt/modules/parted_partition.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/salt/modules/parted_partition.py b/salt/modules/parted_partition.py
index 9441fec49fd1833da590b3f65637e8e92b287d1c..7d08a7b315c990e7a87c9c77fd6550a6174b7160 100644
--- a/salt/modules/parted_partition.py
+++ b/salt/modules/parted_partition.py
@@ -515,7 +515,7 @@ def mkpartfs(device, part_type, fs_type, start, end):
'Invalid part_type passed to partition.mkpartfs'
)
- if not _is_fstype(fs_type):
+ if fs_type and not _is_fstype(fs_type):
raise CommandExecutionError(
'Invalid fs_type passed to partition.mkpartfs'
)
--
2.23.0
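The one-word change above restores the guard sketched below: fs_type is optional (LVM partitions have none), so validation must only fire when a value was actually supplied. The valid_fs tuple here is a shortened stand-in for the real filesystem list:

    def build_mkpart_cmd(device, part_type, fs_type, start, end,
                         valid_fs=("ext2", "fat32", "xfs")):
        if fs_type and fs_type not in valid_fs:
            raise ValueError("Invalid fs_type passed to partition.mkpartfs")
        if fs_type:
            return ("parted", "-m", "-s", "--", device, "mkpart",
                    part_type, fs_type, start, end)
        return ("parted", "-m", "-s", "--", device, "mkpart",
                part_type, start, end)

    assert "xfs" in build_mkpart_cmd("/dev/sdb", "primary", "xfs", "0%", "100%")
    assert "xfs" not in build_mkpart_cmd("/dev/sdb", "primary", None, "0%", "100%")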
++++++ fixes-cve-2018-15750-cve-2018-15751.patch ++++++
From 9ec54e8c1394ab678c6129d98f07c6eafd446399 Mon Sep 17 00:00:00 2001
From: Erik Johnson <palehose@gmail.com>
Date: Fri, 24 Aug 2018 10:35:55 -0500
Subject: [PATCH] Fixes: CVE-2018-15750, CVE-2018-15751
Ensure that tokens are hex to avoid hanging/errors in cherrypy
Add empty token salt-api integration tests
Handle Auth exceptions in run_job
Update tornado test to correct authentication message
---
salt/netapi/rest_cherrypy/app.py | 7 -------
tests/integration/netapi/rest_tornado/test_app.py | 4 ++--
2 files changed, 2 insertions(+), 9 deletions(-)
diff --git a/salt/netapi/rest_cherrypy/app.py b/salt/netapi/rest_cherrypy/app.py
index fa1b540e5f..f8b500482b 100644
--- a/salt/netapi/rest_cherrypy/app.py
+++ b/salt/netapi/rest_cherrypy/app.py
@@ -1176,13 +1176,6 @@ class LowDataAdapter(object):
except (TypeError, ValueError):
raise cherrypy.HTTPError(401, 'Invalid token')
- if 'token' in chunk:
- # Make sure that auth token is hex
- try:
- int(chunk['token'], 16)
- except (TypeError, ValueError):
- raise cherrypy.HTTPError(401, 'Invalid token')
-
if client:
chunk['client'] = client
diff --git a/tests/integration/netapi/rest_tornado/test_app.py b/tests/integration/netapi/rest_tornado/test_app.py
index 10ec29f7fa..4102b5645a 100644
--- a/tests/integration/netapi/rest_tornado/test_app.py
+++ b/tests/integration/netapi/rest_tornado/test_app.py
@@ -282,8 +282,8 @@ class TestSaltAPIHandler(_SaltnadoIntegrationTestCase):
self.assertIn('jid', ret[0]) # the first 2 are regular returns
self.assertIn('jid', ret[1])
self.assertIn('Failed to authenticate', ret[2]) # bad auth
- self.assertEqual(ret[0]['minions'], sorted(['minion', 'sub_minion']))
- self.assertEqual(ret[1]['minions'], sorted(['minion', 'sub_minion']))
+ self.assertEqual(ret[0]['minions'], sorted(['minion', 'sub_minion', 'localhost']))
+ self.assertEqual(ret[1]['minions'], sorted(['minion', 'sub_minion', 'localhost']))
def test_simple_local_async_post_no_tgt(self):
low = [{'client': 'local_async',
--
2.16.4
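The removed hunk was a duplicate of a check kept earlier in the same method; the validation itself reduces to the sketch below, which rejects any token that is not plain hexadecimal before it can reach the token store:

    def is_hex_token(token):
        try:
            int(token, 16)  # raises for None and for non-hex strings
            return True
        except (TypeError, ValueError):
            return False

    assert is_hex_token("deadbeef0123")
    assert not is_hex_token("../../../etc/shadow")
    assert not is_hex_token(None)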
++++++ fixing-streamclosed-issue.patch ++++++
From 9a5f007a5baa4ba1d28b0e6708bac8b134e4891c Mon Sep 17 00:00:00 2001
From: Mihai Dincă <dincamihai@users.noreply.github.com>
Date: Tue, 26 Nov 2019 18:26:31 +0100
Subject: [PATCH] Fixing StreamClosed issue
---
salt/cli/batch_async.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/salt/cli/batch_async.py b/salt/cli/batch_async.py
index 754c257b36..c4545e3ebc 100644
--- a/salt/cli/batch_async.py
+++ b/salt/cli/batch_async.py
@@ -221,7 +221,6 @@ class BatchAsync(object):
"metadata": self.metadata
}
self.event.fire_event(data, "salt/batch/{0}/done".format(self.batch_jid))
- self.event.remove_event_handler(self.__event_handler)
for (pattern, label) in self.patterns:
if label in ["ping_return", "batch_run"]:
self.event.unsubscribe(pattern, match_type='glob')
@@ -265,6 +264,7 @@ class BatchAsync(object):
def __del__(self):
self.local = None
+ self.event.remove_event_handler(self.__event_handler)
self.event = None
self.ioloop = None
gc.collect()
--
2.16.4
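The ordering change above can be seen in isolation with the toy classes below: handler teardown now happens only in __del__ rather than at end-of-batch, so straggler events arriving between the "done" fire and object destruction still find a live handler instead of a closed stream. A sketch, not Salt's classes:

    class Event:
        def __init__(self):
            self.handlers = []

        def remove_event_handler(self, handler):
            if handler in self.handlers:
                self.handlers.remove(handler)

    class Batch:
        def __init__(self, event):
            self.event = event
            self.event.handlers.append(self.__event_handler)

        def __event_handler(self, raw):
            pass

        def end_batch(self):
            pass  # the handler is intentionally left attached here now

        def __del__(self):
            self.event.remove_event_handler(self.__event_handler)
            self.event = None

    ev = Event()
    batch = Batch(ev)
    batch.end_batch()
    assert ev.handlers  # still subscribed after the batch has ended
    # the handler is only detached once 'batch' is garbage-collected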
++++++ get-os_arch-also-without-rpm-package-installed.patch ++++++
From 98f3bd70aaa145b88e8bd4b947b578435e2b1e57 Mon Sep 17 00:00:00 2001
From: Bo Maryniuk <bo@suse.de>
Date: Wed, 14 Nov 2018 17:36:23 +0100
Subject: [PATCH] Get os_arch also without RPM package installed
Backport pkg.rpm test
Add pkg.rpm unit test case
Fix docstring
Add UT for OS architecture fallback when no RPM is found (e.g. initrd)
Add UT for OS architecture detection on fallback when no CPU arch can be determined
Add UT for OS arch detection when no CPU arch or machine can be determined
Remove unsupported testcase
---
tests/unit/utils/test_pkg.py | 48 ++++++++------------------------------------
1 file changed, 8 insertions(+), 40 deletions(-)
diff --git a/tests/unit/utils/test_pkg.py b/tests/unit/utils/test_pkg.py
index e8b19bef14..361e0bf92f 100644
--- a/tests/unit/utils/test_pkg.py
+++ b/tests/unit/utils/test_pkg.py
@@ -2,51 +2,19 @@
from __future__ import absolute_import, unicode_literals, print_function
-from tests.support.unit import TestCase
-from tests.support.mock import MagicMock, patch
+from tests.support.unit import TestCase, skipIf
+from tests.support.mock import Mock, MagicMock, patch, NO_MOCK, NO_MOCK_REASON
import salt.utils.pkg
from salt.utils.pkg import rpm
-
-class PkgUtilsTestCase(TestCase):
- '''
- TestCase for salt.utils.pkg module
- '''
- test_parameters = [
- ("16.0.0.49153-0+f1", "", "16.0.0.49153-0+f1"),
- ("> 15.0.0", ">", "15.0.0"),
- ("< 15.0.0", "<", "15.0.0"),
- ("<< 15.0.0", "<<", "15.0.0"),
- (">> 15.0.0", ">>", "15.0.0"),
- (">= 15.0.0", ">=", "15.0.0"),
- ("<= 15.0.0", "<=", "15.0.0"),
- ("!= 15.0.0", "!=", "15.0.0"),
- ("<=> 15.0.0", "<=>", "15.0.0"),
- ("<> 15.0.0", "<>", "15.0.0"),
- ("= 15.0.0", "=", "15.0.0"),
- (">15.0.0", ">", "15.0.0"),
- ("<15.0.0", "<", "15.0.0"),
- ("<<15.0.0", "<<", "15.0.0"),
- (">>15.0.0", ">>", "15.0.0"),
- (">=15.0.0", ">=", "15.0.0"),
- ("<=15.0.0", "<=", "15.0.0"),
- ("!=15.0.0", "!=", "15.0.0"),
- ("<=>15.0.0", "<=>", "15.0.0"),
- ("<>15.0.0", "<>", "15.0.0"),
- ("=15.0.0", "=", "15.0.0"),
- ("", "", "")
- ]
-
- def test_split_comparison(self):
- '''
- Tests salt.utils.pkg.split_comparison
- '''
- for test_parameter in self.test_parameters:
- oper, verstr = salt.utils.pkg.split_comparison(test_parameter[0])
- self.assertEqual(test_parameter[1], oper)
- self.assertEqual(test_parameter[2], verstr)
+try:
+ import pytest
+except ImportError:
+ pytest = None
+@skipIf(NO_MOCK, NO_MOCK_REASON)
+@skipIf(pytest is None, 'PyTest is missing')
class PkgRPMTestCase(TestCase):
'''
Test case for pkg.rpm utils
--
2.16.4
++++++ implement-network.fqdns-module-function-bsc-1134860-.patch ++++++
From a11587a1209cd198f421fafdb43510b6d651f4b2 Mon Sep 17 00:00:00 2001
From: EricS <54029547+ESiebigteroth@users.noreply.github.com>
Date: Tue, 3 Sep 2019 11:22:53 +0200
Subject: [PATCH] Implement network.fqdns module function (bsc#1134860)
(#172)
* Duplicate fqdns logic in module.network
* Move _get_interfaces to utils.network
* Reuse network.fqdns in grains.core.fqdns
* Return empty list when fqdns grains is disabled
Co-authored-by: Eric Siebigteroth <eric.siebigteroth@suse.de>
---
salt/grains/core.py | 66 +++++-------------------------------------
salt/modules/network.py | 60 ++++++++++++++++++++++++++++++++++++++
salt/utils/network.py | 12 ++++++++
tests/unit/grains/test_core.py | 63 +++++++++++++++++++++++++++++++---------
4 files changed, 130 insertions(+), 71 deletions(-)
diff --git a/salt/grains/core.py b/salt/grains/core.py
index 0f3ccd9b92..77ae99590f 100644
--- a/salt/grains/core.py
+++ b/salt/grains/core.py
@@ -26,8 +26,9 @@ from errno import EACCES, EPERM
import datetime
import warnings
import time
+import salt.modules.network
-from multiprocessing.pool import ThreadPool
+from salt.utils.network import _get_interfaces
# pylint: disable=import-error
try:
@@ -84,6 +85,7 @@ __salt__ = {
'cmd.run_all': salt.modules.cmdmod._run_all_quiet,
'smbios.records': salt.modules.smbios.records,
'smbios.get': salt.modules.smbios.get,
+ 'network.fqdns': salt.modules.network.fqdns,
}
log = logging.getLogger(__name__)
@@ -107,7 +109,6 @@ HAS_UNAME = True
if not hasattr(os, 'uname'):
HAS_UNAME = False
-_INTERFACES = {}
# Possible value for h_errno defined in netdb.h
HOST_NOT_FOUND = 1
@@ -1553,17 +1554,6 @@ def _linux_bin_exists(binary):
return False
-def _get_interfaces():
- '''
- Provide a dict of the connected interfaces and their ip addresses
- '''
-
- global _INTERFACES
- if not _INTERFACES:
- _INTERFACES = salt.utils.network.interfaces()
- return _INTERFACES
-
-
def _parse_lsb_release():
ret = {}
try:
@@ -2271,52 +2261,12 @@ def fqdns():
'''
Return all known FQDNs for the system by enumerating all interfaces and
then trying to reverse resolve them (excluding 'lo' interface).
+ To disable the fqdns grain, set enable_fqdns_grains: False in the minion configuration file.
'''
- # Provides:
- # fqdns
-
- grains = {}
- fqdns = set()
-
- def _lookup_fqdn(ip):
- try:
- name, aliaslist, addresslist = socket.gethostbyaddr(ip)
- return [socket.getfqdn(name)] + [als for als in aliaslist if salt.utils.network.is_fqdn(als)]
- except socket.herror as err:
- if err.errno in (0, HOST_NOT_FOUND, NO_DATA):
- # No FQDN for this IP address, so we don't need to know this all the time.
- log.debug("Unable to resolve address %s: %s", ip, err)
- else:
- log.error(err_message, ip, err)
- except (socket.error, socket.gaierror, socket.timeout) as err:
- log.error(err_message, ip, err)
-
- start = time.time()
-
- addresses = salt.utils.network.ip_addrs(include_loopback=False, interface_data=_get_interfaces())
- addresses.extend(salt.utils.network.ip_addrs6(include_loopback=False, interface_data=_get_interfaces()))
- err_message = 'Exception during resolving address: %s'
-
- # Create a ThreadPool to process the underlying calls to 'socket.gethostbyaddr' in parallel.
- # This avoid blocking the execution when the "fqdn" is not defined for certains IP addresses, which was causing
- # that "socket.timeout" was reached multiple times secuencially, blocking execution for several seconds.
-
- try:
- pool = ThreadPool(8)
- results = pool.map(_lookup_fqdn, addresses)
- pool.close()
- pool.join()
- except Exception as exc:
- log.error("Exception while creating a ThreadPool for resolving FQDNs: %s", exc)
-
- for item in results:
- if item:
- fqdns.update(item)
-
- elapsed = time.time() - start
- log.debug('Elapsed time getting FQDNs: {} seconds'.format(elapsed))
-
- return {"fqdns": sorted(list(fqdns))}
+ opt = {"fqdns": []}
+ if __opts__.get('enable_fqdns_grains', True) == True:
+ opt = __salt__['network.fqdns']()
+ return opt
def ip_fqdn():
diff --git a/salt/modules/network.py b/salt/modules/network.py
index 38e2bc326e..880f4f8d5f 100644
--- a/salt/modules/network.py
+++ b/salt/modules/network.py
@@ -11,6 +11,10 @@ import logging
import re
import os
import socket
+import time
+
+from multiprocessing.pool import ThreadPool
+
# Import salt libs
import salt.utils.decorators.path
@@ -1887,3 +1891,59 @@ def iphexval(ip):
a = ip.split('.')
hexval = ['%02X' % int(x) for x in a] # pylint: disable=E1321
return ''.join(hexval)
+
+
+def fqdns():
+ '''
+ Return all known FQDNs for the system by enumerating all interfaces and
+ then trying to reverse resolve them (excluding 'lo' interface).
+ '''
+ # Provides:
+ # fqdns
+
+ # Possible value for h_errno defined in netdb.h
+ HOST_NOT_FOUND = 1
+ NO_DATA = 4
+
+ grains = {}
+ fqdns = set()
+
+ def _lookup_fqdn(ip):
+ try:
+ name, aliaslist, addresslist = socket.gethostbyaddr(ip)
+ return [socket.getfqdn(name)] + [als for als in aliaslist if salt.utils.network.is_fqdn(als)]
+ except socket.herror as err:
+ if err.errno in (0, HOST_NOT_FOUND, NO_DATA):
+ # No FQDN for this IP address, so we don't need to know this all the time.
+ log.debug("Unable to resolve address %s: %s", ip, err)
+ else:
+ log.error(err_message, err)
+ except (socket.error, socket.gaierror, socket.timeout) as err:
+ log.error(err_message, err)
+
+ start = time.time()
+
+ addresses = salt.utils.network.ip_addrs(include_loopback=False, interface_data=salt.utils.network._get_interfaces())
+ addresses.extend(salt.utils.network.ip_addrs6(include_loopback=False, interface_data=salt.utils.network._get_interfaces()))
+ err_message = 'Exception during resolving address: %s'
+
+ # Create a ThreadPool to process the underlying calls to 'socket.gethostbyaddr' in parallel.
+ # This avoid blocking the execution when the "fqdn" is not defined for certains IP addresses, which was causing
+ # that "socket.timeout" was reached multiple times secuencially, blocking execution for several seconds.
+
+ try:
+ pool = ThreadPool(8)
+ results = pool.map(_lookup_fqdn, addresses)
+ pool.close()
+ pool.join()
+ except Exception as exc:
+ log.error("Exception while creating a ThreadPool for resolving FQDNs: %s", exc)
+
+ for item in results:
+ if item:
+ fqdns.update(item)
+
+ elapsed = time.time() - start
+ log.debug('Elapsed time getting FQDNs: {} seconds'.format(elapsed))
+
+ return {"fqdns": sorted(list(fqdns))}
\ No newline at end of file
diff --git a/salt/utils/network.py b/salt/utils/network.py
index 74536cc143..4cc8a05c4a 100644
--- a/salt/utils/network.py
+++ b/salt/utils/network.py
@@ -50,6 +50,18 @@ except (ImportError, OSError, AttributeError, TypeError):
pass
+_INTERFACES = {}
+def _get_interfaces(): #! function
+ '''
+ Provide a dict of the connected interfaces and their ip addresses
+ '''
+
+ global _INTERFACES
+ if not _INTERFACES:
+ _INTERFACES = interfaces()
+ return _INTERFACES
+
+
def sanitize_host(host):
'''
Sanitize host string.
diff --git a/tests/unit/grains/test_core.py b/tests/unit/grains/test_core.py
index ac03b57226..60914204b0 100644
--- a/tests/unit/grains/test_core.py
+++ b/tests/unit/grains/test_core.py
@@ -35,6 +35,7 @@ import salt.utils.path
import salt.modules.cmdmod
import salt.modules.smbios
import salt.grains.core as core
+import salt.modules.network
# Import 3rd-party libs
from salt.ext import six
@@ -1029,6 +1030,40 @@ class CoreGrainsTestCase(TestCase, LoaderModuleMockMixin):
with patch.object(salt.utils.dns, 'parse_resolv', MagicMock(return_value=resolv_mock)):
assert core.dns() == ret
+
+ def test_enablefqdnsFalse(self):
+ '''
+ tests enable_fqdns_grains is set to False
+ '''
+ with patch.dict('salt.grains.core.__opts__', {'enable_fqdns_grains':False}):
+ assert core.fqdns() == {"fqdns": []}
+
+
+ def test_enablefqdnsTrue(self):
+ '''
+ testing that grains uses network.fqdns module
+ '''
+ with patch.dict('salt.grains.core.__salt__', {'network.fqdns': MagicMock(return_value="my.fake.domain")}):
+ with patch.dict('salt.grains.core.__opts__', {'enable_fqdns_grains':True}):
+ assert core.fqdns() == 'my.fake.domain'
+
+
+ def test_enablefqdnsNone(self):
+ '''
+ testing default fqdns grains is returned when enable_fqdns_grains is None
+ '''
+ with patch.dict('salt.grains.core.__opts__', {'enable_fqdns_grains':None}):
+ assert core.fqdns() == {"fqdns": []}
+
+
+ def test_enablefqdnswithoutpaching(self):
+ '''
+ testing fqdns grains is enabled by default
+ '''
+ with patch.dict('salt.grains.core.__salt__', {'network.fqdns': MagicMock(return_value="my.fake.domain")}):
+ assert core.fqdns() == 'my.fake.domain'
+
+
@skipIf(not salt.utils.platform.is_linux(), 'System is not Linux')
@patch('salt.utils.network.ip_addrs', MagicMock(return_value=['1.2.3.4', '5.6.7.8']))
@patch('salt.utils.network.ip_addrs6',
@@ -1044,11 +1079,12 @@ class CoreGrainsTestCase(TestCase, LoaderModuleMockMixin):
('foo.bar.baz', [], ['fe80::a8b2:93ff:fe00:0']),
('bluesniff.foo.bar', [], ['fe80::a8b2:93ff:dead:beef'])]
ret = {'fqdns': ['bluesniff.foo.bar', 'foo.bar.baz', 'rinzler.evil-corp.com']}
- with patch.object(socket, 'gethostbyaddr', side_effect=reverse_resolv_mock):
- fqdns = core.fqdns()
- assert "fqdns" in fqdns
- assert len(fqdns['fqdns']) == len(ret['fqdns'])
- assert set(fqdns['fqdns']) == set(ret['fqdns'])
+ with patch.dict(core.__salt__, {'network.fqdns': salt.modules.network.fqdns}):
+ with patch.object(socket, 'gethostbyaddr', side_effect=reverse_resolv_mock):
+ fqdns = core.fqdns()
+ assert "fqdns" in fqdns
+ assert len(fqdns['fqdns']) == len(ret['fqdns'])
+ assert set(fqdns['fqdns']) == set(ret['fqdns'])
@skipIf(not salt.utils.platform.is_linux(), 'System is not Linux')
@patch('salt.utils.network.ip_addrs', MagicMock(return_value=['1.2.3.4']))
@@ -1094,14 +1130,15 @@ class CoreGrainsTestCase(TestCase, LoaderModuleMockMixin):
('rinzler.evil-corp.com', ["false-hostname", "badaliass"], ['5.6.7.8']),
('foo.bar.baz', [], ['fe80::a8b2:93ff:fe00:0']),
('bluesniff.foo.bar', ["alias.bluesniff.foo.bar"], ['fe80::a8b2:93ff:dead:beef'])]
- with patch.object(socket, 'gethostbyaddr', side_effect=reverse_resolv_mock):
- fqdns = core.fqdns()
- assert "fqdns" in fqdns
- for alias in ["this.is.valid.alias", "alias.bluesniff.foo.bar"]:
- assert alias in fqdns["fqdns"]
-
- for alias in ["throwmeaway", "false-hostname", "badaliass"]:
- assert alias not in fqdns["fqdns"]
+ with patch.dict(core.__salt__, {'network.fqdns': salt.modules.network.fqdns}):
+ with patch.object(socket, 'gethostbyaddr', side_effect=reverse_resolv_mock):
+ fqdns = core.fqdns()
+ assert "fqdns" in fqdns
+ for alias in ["this.is.valid.alias", "alias.bluesniff.foo.bar"]:
+ assert alias in fqdns["fqdns"]
+
+ for alias in ["throwmeaway", "false-hostname", "badaliass"]:
+ assert alias not in fqdns["fqdns"]
def test_core_virtual(self):
'''
--
2.16.4
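Condensed to its core, the new network.fqdns resolves every address in a small thread pool so that one unresolvable IP cannot serially block the rest. A self-contained sketch (socket.herror is an OSError subclass on Python 3; the alias filtering via is_fqdn is omitted here for brevity):

    import socket
    from multiprocessing.pool import ThreadPool

    def _lookup(ip):
        try:
            name, aliases, _ = socket.gethostbyaddr(ip)
            return [socket.getfqdn(name)] + aliases
        except OSError:
            return []  # unresolvable addresses contribute nothing

    def fqdns(addresses):
        # Eight workers bound the total wall-clock cost of slow reverse
        # lookups instead of paying each socket.timeout sequentially.
        pool = ThreadPool(8)
        try:
            results = pool.map(_lookup, addresses)
        finally:
            pool.close()
            pool.join()
        return {"fqdns": sorted({n for found in results for n in found})}

    print(fqdns(["127.0.0.1"]))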
++++++ improve-batch_async-to-release-consumed-memory-bsc-1.patch ++++++
From 65e33acaf10fdd838c0cdf34ec93df3a2ed1f0d2 Mon Sep 17 00:00:00 2001
From: Pablo Suárez Hernández <psuarezhernandez@suse.com>
Date: Thu, 26 Sep 2019 10:41:06 +0100
Subject: [PATCH] Improve batch_async to release consumed memory
(bsc#1140912)
---
salt/cli/batch_async.py | 73 ++++++++++++++++++++++++++++++-------------------
1 file changed, 45 insertions(+), 28 deletions(-)
diff --git a/salt/cli/batch_async.py b/salt/cli/batch_async.py
index 8a67331102..2bb50459c8 100644
--- a/salt/cli/batch_async.py
+++ b/salt/cli/batch_async.py
@@ -5,6 +5,7 @@ Execute a job on the targeted minions by using a moving window of fixed size `ba
# Import python libs
from __future__ import absolute_import, print_function, unicode_literals
+import gc
import tornado
# Import salt libs
@@ -77,6 +78,7 @@ class BatchAsync(object):
self.batch_jid = jid_gen()
self.find_job_jid = jid_gen()
self.find_job_returned = set()
+ self.ended = False
self.event = salt.utils.event.get_event(
'master',
self.opts['sock_dir'],
@@ -86,6 +88,7 @@ class BatchAsync(object):
io_loop=ioloop,
keep_loop=True)
self.scheduled = False
+ self.patterns = {}
def __set_event_handler(self):
ping_return_pattern = 'salt/job/{0}/ret/*'.format(self.ping_jid)
@@ -116,7 +119,7 @@ class BatchAsync(object):
if minion in self.active:
self.active.remove(minion)
self.done_minions.add(minion)
- self.schedule_next()
+ self.event.io_loop.spawn_callback(self.schedule_next)
def _get_next(self):
to_run = self.minions.difference(
@@ -129,23 +132,23 @@ class BatchAsync(object):
)
return set(list(to_run)[:next_batch_size])
- @tornado.gen.coroutine
def check_find_job(self, batch_minions, jid):
- find_job_return_pattern = 'salt/job/{0}/ret/*'.format(jid)
- self.event.unsubscribe(find_job_return_pattern, match_type='glob')
- self.patterns.remove((find_job_return_pattern, "find_job_return"))
+ if self.event:
+ find_job_return_pattern = 'salt/job/{0}/ret/*'.format(jid)
+ self.event.unsubscribe(find_job_return_pattern, match_type='glob')
+ self.patterns.remove((find_job_return_pattern, "find_job_return"))
- timedout_minions = batch_minions.difference(self.find_job_returned).difference(self.done_minions)
- self.timedout_minions = self.timedout_minions.union(timedout_minions)
- self.active = self.active.difference(self.timedout_minions)
- running = batch_minions.difference(self.done_minions).difference(self.timedout_minions)
+ timedout_minions = batch_minions.difference(self.find_job_returned).difference(self.done_minions)
+ self.timedout_minions = self.timedout_minions.union(timedout_minions)
+ self.active = self.active.difference(self.timedout_minions)
+ running = batch_minions.difference(self.done_minions).difference(self.timedout_minions)
- if timedout_minions:
- self.schedule_next()
+ if timedout_minions:
+ self.schedule_next()
- if running:
- self.find_job_returned = self.find_job_returned.difference(running)
- self.event.io_loop.add_callback(self.find_job, running)
+ if running:
+ self.find_job_returned = self.find_job_returned.difference(running)
+ self.event.io_loop.spawn_callback(self.find_job, running)
@tornado.gen.coroutine
def find_job(self, minions):
@@ -165,8 +168,8 @@ class BatchAsync(object):
gather_job_timeout=self.opts['gather_job_timeout'],
jid=jid,
**self.eauth)
- self.event.io_loop.call_later(
- self.opts['gather_job_timeout'],
+ yield tornado.gen.sleep(self.opts['gather_job_timeout'])
+ self.event.io_loop.spawn_callback(
self.check_find_job,
not_done,
jid)
@@ -174,10 +177,6 @@ class BatchAsync(object):
@tornado.gen.coroutine
def start(self):
self.__set_event_handler()
- #start batching even if not all minions respond to ping
- self.event.io_loop.call_later(
- self.batch_presence_ping_timeout or self.opts['gather_job_timeout'],
- self.start_batch)
ping_return = yield self.local.run_job_async(
self.opts['tgt'],
'test.ping',
@@ -191,6 +190,10 @@ class BatchAsync(object):
metadata=self.metadata,
**self.eauth)
self.targeted_minions = set(ping_return['minions'])
+ #start batching even if not all minions respond to ping
+ yield tornado.gen.sleep(self.batch_presence_ping_timeout or self.opts['gather_job_timeout'])
+ self.event.io_loop.spawn_callback(self.start_batch)
+
@tornado.gen.coroutine
def start_batch(self):
@@ -202,12 +205,14 @@ class BatchAsync(object):
"down_minions": self.targeted_minions.difference(self.minions),
"metadata": self.metadata
}
- self.event.fire_event(data, "salt/batch/{0}/start".format(self.batch_jid))
- yield self.run_next()
+ ret = self.event.fire_event(data, "salt/batch/{0}/start".format(self.batch_jid))
+ self.event.io_loop.spawn_callback(self.run_next)
+ @tornado.gen.coroutine
def end_batch(self):
left = self.minions.symmetric_difference(self.done_minions.union(self.timedout_minions))
- if not left:
+ if not left and not self.ended:
+ self.ended = True
data = {
"available_minions": self.minions,
"down_minions": self.targeted_minions.difference(self.minions),
@@ -220,20 +225,26 @@ class BatchAsync(object):
for (pattern, label) in self.patterns:
if label in ["ping_return", "batch_run"]:
self.event.unsubscribe(pattern, match_type='glob')
+ del self
+ gc.collect()
+ yield
+ @tornado.gen.coroutine
def schedule_next(self):
if not self.scheduled:
self.scheduled = True
# call later so that we maybe gather more returns
- self.event.io_loop.call_later(self.batch_delay, self.run_next)
+ yield tornado.gen.sleep(self.batch_delay)
+ self.event.io_loop.spawn_callback(self.run_next)
@tornado.gen.coroutine
def run_next(self):
+ self.scheduled = False
next_batch = self._get_next()
if next_batch:
self.active = self.active.union(next_batch)
try:
- yield self.local.run_job_async(
+ ret = yield self.local.run_job_async(
next_batch,
self.opts['fun'],
self.opts['arg'],
@@ -244,11 +255,17 @@ class BatchAsync(object):
jid=self.batch_jid,
metadata=self.metadata)
- self.event.io_loop.call_later(self.opts['timeout'], self.find_job, set(next_batch))
+ yield tornado.gen.sleep(self.opts['timeout'])
+ self.event.io_loop.spawn_callback(self.find_job, set(next_batch))
except Exception as ex:
log.error("Error in scheduling next batch: %s", ex)
self.active = self.active.difference(next_batch)
else:
- self.end_batch()
- self.scheduled = False
+ yield self.end_batch()
+ gc.collect()
yield
+
+ def __del__(self):
+ self.event = None
+ self.ioloop = None
+ gc.collect()
--
2.16.4
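The transformation repeated throughout this patch is sketched below: each io_loop.call_later(delay, fn) becomes an in-coroutine sleep followed by spawn_callback, so the loop no longer holds a long-lived timeout whose closure keeps the BatchAsync instance (and its consumed memory) alive. Plain tornado is used here in place of salt.ext.tornado:

    import tornado.gen
    import tornado.ioloop

    @tornado.gen.coroutine
    def schedule_next(io_loop, fn, delay):
        # Instead of io_loop.call_later(delay, fn): sleep inside the
        # coroutine, then hand off via spawn_callback. Once this coroutine
        # finishes, nothing in the loop references the caller anymore.
        yield tornado.gen.sleep(delay)
        io_loop.spawn_callback(fn)

    loop = tornado.ioloop.IOLoop.current()
    loop.run_sync(lambda: schedule_next(loop, lambda: None, 0.01))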
++++++ include-aliases-in-the-fqdns-grains.patch ++++++
From 512b189808ea0d7b333587689d7e7eb52d16b189 Mon Sep 17 00:00:00 2001
From: Bo Maryniuk <bo@suse.de>
Date: Tue, 29 Jan 2019 11:11:38 +0100
Subject: [PATCH] Include aliases in the fqdns grains
Add UT for "is_fqdn"
Add "is_fqdn" check to the network utils
Bugfix: include FQDN aliases
Deprecate UnitTest assertion in favour of built-in assert keyword
Add UT for fqdns aliases
Leverage cached interfaces, if any.
---
salt/grains/core.py | 14 ++++++--------
salt/utils/network.py | 12 ++++++++++++
tests/unit/grains/test_core.py | 28 +++++++++++++++++++++++++---
tests/unit/utils/test_network.py | 24 ++++++++++++++++++++++++
4 files changed, 67 insertions(+), 11 deletions(-)
diff --git a/salt/grains/core.py b/salt/grains/core.py
index 7b7e328520..309e4c9c4a 100644
--- a/salt/grains/core.py
+++ b/salt/grains/core.py
@@ -2275,14 +2275,13 @@ def fqdns():
grains = {}
fqdns = set()
- addresses = salt.utils.network.ip_addrs(include_loopback=False,
- interface_data=_INTERFACES)
- addresses.extend(salt.utils.network.ip_addrs6(include_loopback=False,
- interface_data=_INTERFACES))
- err_message = 'An exception occurred resolving address \'%s\': %s'
+ addresses = salt.utils.network.ip_addrs(include_loopback=False, interface_data=_get_interfaces())
+ addresses.extend(salt.utils.network.ip_addrs6(include_loopback=False, interface_data=_get_interfaces()))
+ err_message = 'Exception during resolving address: %s'
for ip in addresses:
try:
- fqdns.add(socket.getfqdn(socket.gethostbyaddr(ip)[0]))
+ name, aliaslist, addresslist = socket.gethostbyaddr(ip)
+ fqdns.update([socket.getfqdn(name)] + [als for als in aliaslist if salt.utils.network.is_fqdn(als)])
except socket.herror as err:
if err.errno in (0, HOST_NOT_FOUND, NO_DATA):
# No FQDN for this IP address, so we don't need to know this all the time.
@@ -2292,8 +2291,7 @@ def fqdns():
except (socket.error, socket.gaierror, socket.timeout) as err:
log.error(err_message, ip, err)
- grains['fqdns'] = sorted(list(fqdns))
- return grains
+ return {"fqdns": sorted(list(fqdns))}
def ip_fqdn():
diff --git a/salt/utils/network.py b/salt/utils/network.py
index 906d1cb3bc..2ae2e213b7 100644
--- a/salt/utils/network.py
+++ b/salt/utils/network.py
@@ -1958,3 +1958,15 @@ def parse_host_port(host_port):
raise ValueError('bad hostname: "{}"'.format(host))
return host, port
+
+
+def is_fqdn(hostname):
+ """
+ Verify if hostname conforms to be a FQDN.
+
+ :param hostname: text string with the name of the host
+ :return: bool, True if hostname is correct FQDN, False otherwise
+ """
+
+ compliant = re.compile(r"(?!-)[A-Z\d\-\_]{1,63}(?<!-)$", re.IGNORECASE)
+ return "." in hostname and len(hostname) < 0xff and all(compliant.match(x) for x in hostname.rstrip(".").split("."))
diff --git a/tests/unit/grains/test_core.py b/tests/unit/grains/test_core.py
index c40595eb3f..ac03b57226 100644
--- a/tests/unit/grains/test_core.py
+++ b/tests/unit/grains/test_core.py
@@ -1046,9 +1046,9 @@ class CoreGrainsTestCase(TestCase, LoaderModuleMockMixin):
ret = {'fqdns': ['bluesniff.foo.bar', 'foo.bar.baz', 'rinzler.evil-corp.com']}
with patch.object(socket, 'gethostbyaddr', side_effect=reverse_resolv_mock):
fqdns = core.fqdns()
- self.assertIn('fqdns', fqdns)
- self.assertEqual(len(fqdns['fqdns']), len(ret['fqdns']))
- self.assertEqual(set(fqdns['fqdns']), set(ret['fqdns']))
+ assert "fqdns" in fqdns
+ assert len(fqdns['fqdns']) == len(ret['fqdns'])
+ assert set(fqdns['fqdns']) == set(ret['fqdns'])
@skipIf(not salt.utils.platform.is_linux(), 'System is not Linux')
@patch('salt.utils.network.ip_addrs', MagicMock(return_value=['1.2.3.4']))
@@ -1081,6 +1081,28 @@ class CoreGrainsTestCase(TestCase, LoaderModuleMockMixin):
mock_log.debug.assert_not_called()
mock_log.error.assert_called_once()
+ @patch.object(salt.utils.platform, 'is_windows', MagicMock(return_value=False))
+ @patch('salt.utils.network.ip_addrs', MagicMock(return_value=['1.2.3.4', '5.6.7.8']))
+ @patch('salt.utils.network.ip_addrs6',
+ MagicMock(return_value=['fe80::a8b2:93ff:fe00:0', 'fe80::a8b2:93ff:dead:beef']))
+ @patch('salt.utils.network.socket.getfqdn', MagicMock(side_effect=lambda v: v)) # Just pass-through
+ def test_fqdns_aliases(self):
+ '''
+ FQDNs aliases
+ '''
+ reverse_resolv_mock = [('foo.bar.baz', ["throwmeaway", "this.is.valid.alias"], ['1.2.3.4']),
+ ('rinzler.evil-corp.com', ["false-hostname", "badaliass"], ['5.6.7.8']),
+ ('foo.bar.baz', [], ['fe80::a8b2:93ff:fe00:0']),
+ ('bluesniff.foo.bar', ["alias.bluesniff.foo.bar"], ['fe80::a8b2:93ff:dead:beef'])]
+ with patch.object(socket, 'gethostbyaddr', side_effect=reverse_resolv_mock):
+ fqdns = core.fqdns()
+ assert "fqdns" in fqdns
+ for alias in ["this.is.valid.alias", "alias.bluesniff.foo.bar"]:
+ assert alias in fqdns["fqdns"]
+
+ for alias in ["throwmeaway", "false-hostname", "badaliass"]:
+ assert alias not in fqdns["fqdns"]
+
def test_core_virtual(self):
'''
test virtual grain with cmd virt-what
diff --git a/tests/unit/utils/test_network.py b/tests/unit/utils/test_network.py
index 7dcca0166e..74479b0cae 100644
--- a/tests/unit/utils/test_network.py
+++ b/tests/unit/utils/test_network.py
@@ -701,3 +701,27 @@ class NetworkTestCase(TestCase):
# An exception is raised if unicode is passed to socket.getfqdn
minion_id = network.generate_minion_id()
assert minion_id != '', minion_id
+
+ def test_netlink_tool_remote_on(self):
+ with patch('subprocess.check_output', return_value=NETLINK_SS):
+ remotes = network._netlink_tool_remote_on('4505', 'remote')
+ self.assertEqual(remotes, set(['127.0.0.1', '::ffff:1.2.3.4']))
+
+ def test_is_fqdn(self):
+ """
+ Test is_fqdn function passes possible FQDN names.
+
+ :return: None
+ """
+ for fqdn in ["host.domain.com", "something.with.the.dots.still.ok", "UPPERCASE.ALSO.SHOULD.WORK",
+ "MiXeD.CaSe.AcCePtAbLe", "123.host.com", "host123.com", "some_underscore.com", "host-here.com"]:
+ assert network.is_fqdn(fqdn)
+
+ def test_is_not_fqdn(self):
+ """
+ Test is_fqdn function rejects FQDN names.
+
+ :return: None
+ """
+ for fqdn in ["hostname", "/some/path", "$variable.here", "verylonghostname.{}".format("domain" * 45)]:
+ assert not network.is_fqdn(fqdn)
--
2.16.4
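The new salt.utils.network.is_fqdn helper is small enough to restate on its own: a name qualifies when it contains a dot, stays under 255 characters, and every label is 1-63 characters that neither start nor end with a hyphen (underscores are tolerated, per the tests above):

    import re

    def is_fqdn(hostname):
        compliant = re.compile(r"(?!-)[A-Z\d\-\_]{1,63}(?<!-)$", re.IGNORECASE)
        return "." in hostname and len(hostname) < 0xff and all(
            compliant.match(x) for x in hostname.rstrip(".").split("."))

    assert is_fqdn("host.domain.com")
    assert is_fqdn("MiXeD.CaSe.AcCePtAbLe")
    assert not is_fqdn("hostname")
    assert not is_fqdn("-leading.hyphen.com")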
++++++ info_installed-works-without-status-attr-now.patch ++++++
From cedf4ff4dfbc5f8f793aba26808df94e3f7b3d91 Mon Sep 17 00:00:00 2001
From: Jochen Breuer <brejoc@gmail.com>
Date: Tue, 19 May 2020 10:34:35 +0200
Subject: [PATCH] info_installed works without status attr now
If 'status' was excluded via attr, info_installed was no longer able to
detect whether a package was installed. Now info_installed adds the
'status' attribute to the 'lowpkg.info' request again.
---
salt/modules/aptpkg.py | 9 +++++++++
tests/unit/modules/test_aptpkg.py | 17 +++++++++++++++++
2 files changed, 26 insertions(+)
diff --git a/salt/modules/aptpkg.py b/salt/modules/aptpkg.py
index 2835d32263..765d69aff2 100644
--- a/salt/modules/aptpkg.py
+++ b/salt/modules/aptpkg.py
@@ -2867,6 +2867,15 @@ def info_installed(*names, **kwargs):
failhard = kwargs.pop('failhard', True)
kwargs.pop('errors', None) # Only for compatibility with RPM
attr = kwargs.pop('attr', None) # Package attributes to return
+
+ # status is needed to see if a package is installed. So we have to add it,
+ # even if it's excluded via attr parameter. Otherwise all packages are
+ # returned.
+ if attr:
+ attr_list = set(attr.split(','))
+ attr_list.add('status')
+ attr = ','.join(attr_list)
+
all_versions = kwargs.pop('all_versions', False) # This is for backward compatible structure only
if kwargs:
diff --git a/tests/unit/modules/test_aptpkg.py b/tests/unit/modules/test_aptpkg.py
index ba1d874e69..b0193aeaf7 100644
--- a/tests/unit/modules/test_aptpkg.py
+++ b/tests/unit/modules/test_aptpkg.py
@@ -257,6 +257,23 @@ class AptPkgTestCase(TestCase, LoaderModuleMockMixin):
self.assertEqual(aptpkg.info_installed('wget'), installed)
self.assertEqual(len(aptpkg.info_installed()), 1)
+ def test_info_installed_attr_without_status(self):
+ '''
+ Test info_installed 'attr' for inclusion of 'status' attribute.
+
+ Since info_installed should only return installed packages, we need to
+ call __salt__['lowpkg.info'] with the 'status' attribute even if the user
+ is not asking for it in 'attr'. Otherwise info_installed would not be able
+ to check if the package is installed and would return everything.
+
+ :return:
+ '''
+ with patch('salt.modules.aptpkg.__salt__', {'lowpkg.info': MagicMock(return_value=LOWPKG_INFO)}) as wget_lowpkg:
+ ret = aptpkg.info_installed('wget', attr='version')
+ calls = wget_lowpkg['lowpkg.info'].call_args_list.pop()
+ self.assertIn('status', calls.kwargs['attr'])
+ self.assertIn('version', calls.kwargs['attr'])
+
@patch('salt.modules.aptpkg.__salt__', {'lowpkg.info': MagicMock(return_value=LOWPKG_INFO)})
def test_info_installed_attr(self):
'''
--
2.27.0
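Reduced to its essence, the aptpkg change re-adds 'status' to whatever attribute list the caller passed, since without it every package would look equally "installed". A sketch (the real code joins an unordered set; sorting here is only for a stable demo):

    def ensure_status(attr):
        if attr:
            attr_list = set(attr.split(","))
            attr_list.add("status")
            return ",".join(sorted(attr_list))  # sorted only for the demo
        return attr

    assert ensure_status("version") == "status,version"
    assert ensure_status("status,version") == "status,version"
    assert ensure_status(None) is None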
++++++ integration-of-msi-authentication-with-azurearm-clou.patch ++++++
From c750e854c637e405a788f91d5b9a7bd1a0a6edfd Mon Sep 17 00:00:00 2001
From: ed lane <ed.lane.0@gmail.com>
Date: Thu, 30 Aug 2018 06:07:08 -0600
Subject: [PATCH] Integration of MSI authentication with azurearm cloud
driver (#105)
---
salt/cloud/clouds/azurearm.py | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/salt/cloud/clouds/azurearm.py b/salt/cloud/clouds/azurearm.py
index 047fdac0a9..2c1fa04ae8 100644
--- a/salt/cloud/clouds/azurearm.py
+++ b/salt/cloud/clouds/azurearm.py
@@ -58,6 +58,9 @@ The Azure ARM cloud module is used to control access to Microsoft Azure Resource
virtual machine type will be "Windows". Only set this parameter on profiles which install Windows operating systems.
+ if using MSI-style authentication:
+ * ``subscription_id``
+
Example ``/etc/salt/cloud.providers`` or
``/etc/salt/cloud.providers.d/azure.conf`` configuration:
@@ -258,7 +261,8 @@ def get_configured_provider():
provider = __is_provider_configured(
__opts__,
__active_provider_name__ or __virtualname__,
- ('subscription_id', 'username', 'password')
+ required_keys=('subscription_id', 'username', 'password'),
+ log_message=False
)
return provider
@@ -301,6 +305,7 @@ def get_conn(client_type):
)
if tenant is not None:
+ # using Service Principle style authentication...
client_id = config.get_cloud_config_value(
'client_id',
get_configured_provider(), __opts__, search_global=False
--
2.16.4
++++++ let-salt-ssh-use-platform-python-binary-in-rhel8-191.patch ++++++
From 2b5903d2429607a3f46d648520e24c357a56aea6 Mon Sep 17 00:00:00 2001
From: Can Bulut Bayburt <1103552+cbbayburt@users.noreply.github.com>
Date: Wed, 4 Dec 2019 15:59:46 +0100
Subject: [PATCH] Let salt-ssh use 'platform-python' binary in RHEL8
(#191)
RHEL/CentOS 8 has an internal Python interpreter called 'platform-python'
included in the base setup.
Add this binary to the list of Python executables to look for when
creating the sh shim.
---
salt/client/ssh/__init__.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/salt/client/ssh/__init__.py b/salt/client/ssh/__init__.py
index 1373274739..d9e91b0f50 100644
--- a/salt/client/ssh/__init__.py
+++ b/salt/client/ssh/__init__.py
@@ -147,7 +147,7 @@ elif [ "$SUDO" ] && [ -n "$SUDO_USER" ]
then SUDO="sudo "
fi
EX_PYTHON_INVALID={EX_THIN_PYTHON_INVALID}
-PYTHON_CMDS="python3 python27 python2.7 python26 python2.6 python2 python"
+PYTHON_CMDS="python3 /usr/libexec/platform-python python27 python2.7 python26 python2.6 python2 python"
for py_cmd in $PYTHON_CMDS
do
if command -v "$py_cmd" >/dev/null 2>&1 && "$py_cmd" -c "import sys; sys.exit(not (sys.version_info >= (2, 6)));"
--
2.16.4
++++++ loader-invalidate-the-import-cachefor-extra-modules.patch ++++++
From 444e00c6601b878444923f573fdb5f000342be9a Mon Sep 17 00:00:00 2001
From: Alberto Planas <aplanas@suse.com>
Date: Thu, 12 Mar 2020 16:39:42 +0100
Subject: [PATCH] loader: invalidate the import cache for extra modules
Because we are meddling with importlib, we can from time to time run
into an invalidation issue with sys.path_importer_cache that requires
removing FileFinder entries that remain None for the extra_module_dirs
(cherry picked from commit 0fb8e707a45d5caf40759e8b4943590d6fce5046)
---
salt/loader.py | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/salt/loader.py b/salt/loader.py
index 742b2f8e22..5bd4773645 100644
--- a/salt/loader.py
+++ b/salt/loader.py
@@ -1544,9 +1544,11 @@ class LazyLoader(salt.utils.lazy.LazyDict):
self._clean_module_dirs.append(directory)
def __clean_sys_path(self):
+ invalidate_path_importer_cache = False
for directory in self._clean_module_dirs:
if directory in sys.path:
sys.path.remove(directory)
+ invalidate_path_importer_cache = True
self._clean_module_dirs = []
# Be sure that sys.path_importer_cache do not contains any
@@ -1554,6 +1556,16 @@ class LazyLoader(salt.utils.lazy.LazyDict):
if USE_IMPORTLIB:
importlib.invalidate_caches()
+ # Because we are mangling with importlib, we can find from
+ # time to time an invalidation issue with
+ # sys.path_importer_cache, that requires the removal of
+ # FileFinder that remain None for the extra_module_dirs
+ if invalidate_path_importer_cache:
+ for directory in self.extra_module_dirs:
+ if directory in sys.path_importer_cache \
+ and sys.path_importer_cache[directory] is None:
+ del sys.path_importer_cache[directory]
+
def _load_module(self, name):
mod = None
fpath, suffix = self.file_mapping[name][:2]
--
2.16.4
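The failure mode handled above can be reproduced in miniature: once a directory leaves sys.path, importlib may leave a cached None behind in sys.path_importer_cache, which then shadows the directory if it is ever added back. The cleanup amounts to:

    import sys

    def purge_stale_importers(directories):
        for directory in directories:
            # Only entries that are exactly None are stale markers; a real
            # FileFinder must be left alone.
            if directory in sys.path_importer_cache \
                    and sys.path_importer_cache[directory] is None:
                del sys.path_importer_cache[directory]

    sys.path_importer_cache["/tmp/extra-mod-dir"] = None  # hypothetical path
    purge_stale_importers(["/tmp/extra-mod-dir"])
    assert "/tmp/extra-mod-dir" not in sys.path_importer_cache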
++++++ loop-fix-variable-names-for-until_no_eval.patch ++++++
From 2670f83fd1309fbf9fdc98f15f9a6e6a3ecc038d Mon Sep 17 00:00:00 2001
From: Alberto Planas <aplanas@suse.com>
Date: Tue, 24 Mar 2020 17:46:23 +0100
Subject: [PATCH] loop: fix variable names for until_no_eval
---
salt/states/loop.py | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/salt/states/loop.py b/salt/states/loop.py
index 726c8c80165803f3b2d98bf7a197013c53f3ebc8..b631e6c8f62416c04b458a595dc31393987eb904 100644
--- a/salt/states/loop.py
+++ b/salt/states/loop.py
@@ -185,10 +185,10 @@ def until_no_eval(
''.format(name, expected))
if ret['comment']:
return ret
- if not m_args:
- m_args = []
- if not m_kwargs:
- m_kwargs = {}
+ if not args:
+ args = []
+ if not kwargs:
+ kwargs = {}
if init_wait:
time.sleep(init_wait)
--
2.23.0
++++++ loosen-azure-sdk-dependencies-in-azurearm-cloud-driv.patch ++++++
From c9538180f4dd8875ab57dfa3f51ff59608d2481b Mon Sep 17 00:00:00 2001
From: Joachim Gleissner <jgleissner@suse.com>
Date: Tue, 18 Sep 2018 15:07:13 +0200
Subject: [PATCH] loosen azure sdk dependencies in azurearm cloud driver
Remove the dependency on azure-cli, which is not used at all.
Use azure-storage-sdk as a fallback if the multiapi version is not available.
Remove unused import from the azurearm driver.
---
salt/cloud/clouds/azurearm.py | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/salt/cloud/clouds/azurearm.py b/salt/cloud/clouds/azurearm.py
index 2c1fa04ae8..d5757c6d28 100644
--- a/salt/cloud/clouds/azurearm.py
+++ b/salt/cloud/clouds/azurearm.py
@@ -104,6 +104,7 @@ import time
# Salt libs
from salt.ext import six
+import pkgutil
import salt.cache
import salt.config as config
import salt.loader
@@ -126,6 +127,11 @@ try:
import azure.mgmt.network.models as network_models
from azure.storage.blob.blockblobservice import BlockBlobService
from msrestazure.azure_exceptions import CloudError
+ if pkgutil.find_loader('azure.multiapi'):
+ # use multiapi version if available
+ from azure.multiapi.storage.v2016_05_31 import CloudStorageAccount
+ else:
+ from azure.storage import CloudStorageAccount
HAS_LIBS = True
except ImportError:
pass
--
2.16.4
++++++ make-aptpkg.list_repos-compatible-on-enabled-disable.patch ++++++
From 93f69a227b7f8c3d4625c0699ab3923d4a0b3127 Mon Sep 17 00:00:00 2001
From: Bo Maryniuk <bo@suse.de>
Date: Fri, 16 Nov 2018 10:54:12 +0100
Subject: [PATCH] Make aptpkg.list_repos compatible on enabled/disabled
output
---
salt/modules/aptpkg.py | 1 +
1 file changed, 1 insertion(+)
diff --git a/salt/modules/aptpkg.py b/salt/modules/aptpkg.py
index b5503f0b10..8f4d95a195 100644
--- a/salt/modules/aptpkg.py
+++ b/salt/modules/aptpkg.py
@@ -1641,6 +1641,7 @@ def list_repos():
repo['file'] = source.file
repo['comps'] = getattr(source, 'comps', [])
repo['disabled'] = source.disabled
+ repo['enabled'] = not repo['disabled'] # This is for compatibility with the other modules
repo['dist'] = source.dist
repo['type'] = source.type
repo['uri'] = source.uri.rstrip('/')
--
2.16.4
++++++ make-lazyloader.__init__-call-to-_refresh_file_mappi.patch ++++++
From 767feba147611265f8e1dd31c5104018565e78c9 Mon Sep 17 00:00:00 2001
From: Pablo Suárez Hernández <psuarezhernandez@suse.com>
Date: Thu, 23 Apr 2020 09:54:53 +0100
Subject: [PATCH] Make LazyLoader.__init__ call to
_refresh_file_mapping thread-safe (bsc#1169604)
---
salt/loader.py | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/salt/loader.py b/salt/loader.py
index 5bd4773645c77a133701982e19d19739be00a38f..54dadb0b513dbaa4914b0d4b1d343dde709699ad 100644
--- a/salt/loader.py
+++ b/salt/loader.py
@@ -1251,7 +1251,8 @@ class LazyLoader(salt.utils.lazy.LazyDict):
self.suffix_order.append(suffix)
self._lock = threading.RLock()
- self._refresh_file_mapping()
+ with self._lock:
+ self._refresh_file_mapping()
super(LazyLoader, self).__init__() # late init the lazy loader
# create all of the import namespaces
--
2.23.0
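The one-line change matters because _refresh_file_mapping can also run from other threads once the loader becomes visible; taking the (re-entrant) lock during construction as well closes that window. A toy sketch of the pattern, not Salt's actual class:

    import threading

    class Loader:
        def __init__(self):
            self._lock = threading.RLock()
            with self._lock:              # construction now holds the lock
                self._refresh_file_mapping()

        def _refresh_file_mapping(self):
            with self._lock:              # RLock: nested acquire is fine
                self.file_mapping = {}

    loader = Loader()
    assert loader.file_mapping == {}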
++++++ make-profiles-a-package.patch ++++++
From 2aeefa07ff52048e2db5c8c4ebb1cde6efe87cee Mon Sep 17 00:00:00 2001
From: Bo Maryniuk <bo@suse.de>
Date: Mon, 8 Oct 2018 17:52:07 +0200
Subject: [PATCH] Make profiles a package.
Add UTF-8 encoding
Add a docstring
---
salt/cli/support/profiles/__init__.py | 4 ++++
1 file changed, 4 insertions(+)
create mode 100644 salt/cli/support/profiles/__init__.py
diff --git a/salt/cli/support/profiles/__init__.py b/salt/cli/support/profiles/__init__.py
new file mode 100644
index 0000000000..b86aef30b8
--- /dev/null
+++ b/salt/cli/support/profiles/__init__.py
@@ -0,0 +1,4 @@
+# coding=utf-8
+'''
+Profiles for salt-support.
+'''
--
2.16.4
++++++ make-salt.ext.tornado.gen-to-use-salt.ext.backports_.patch ++++++
From 023d1256106319d042233021c0f200bcdc0cd1f0 Mon Sep 17 00:00:00 2001
From: Pablo Suárez Hernández <psuarezhernandez@suse.com>
Date: Fri, 13 Mar 2020 13:01:57 +0000
Subject: [PATCH] Make salt.ext.tornado.gen to use salt.ext.backports_abc
on Python 2
---
salt/ext/tornado/gen.py | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/salt/ext/tornado/gen.py b/salt/ext/tornado/gen.py
index 6cb19730bf1ef3893a4626e9e144eac1c6fa9683..72f422ce28fa43132782a7a0d61b31acd32d138b 100644
--- a/salt/ext/tornado/gen.py
+++ b/salt/ext/tornado/gen.py
@@ -115,13 +115,13 @@ try:
# py35+
from collections.abc import Generator as GeneratorType # type: ignore
except ImportError:
- from backports_abc import Generator as GeneratorType # type: ignore
+ from salt.ext.backports_abc import Generator as GeneratorType # type: ignore
try:
# py35+
from inspect import isawaitable # type: ignore
except ImportError:
- from backports_abc import isawaitable
+ from salt.ext.backports_abc import isawaitable
except ImportError:
if 'APPENGINE_RUNTIME' not in os.environ:
raise
--
2.23.0
++++++ make-setup.py-script-to-not-require-setuptools-9.1.patch ++++++
From 39b88fd0a3f882e0b33973665bbbacdd60c26a9b Mon Sep 17 00:00:00 2001
From: Pablo Suárez Hernández <psuarezhernandez@suse.com>
Date: Wed, 25 Mar 2020 13:09:52 +0000
Subject: [PATCH] Make setup.py script to not require setuptools > 9.1
---
setup.py | 9 ---------
1 file changed, 9 deletions(-)
diff --git a/setup.py b/setup.py
index 06374647df5e82a21fc39b08d41c596f0483ff0c..67a915c64ce5d774e8f89ff3502e85b6bc04b82f 100755
--- a/setup.py
+++ b/setup.py
@@ -700,15 +700,6 @@ class Install(install):
install.finalize_options(self)
def run(self):
- from distutils.version import StrictVersion
- if StrictVersion(setuptools.__version__) < StrictVersion('9.1'):
- sys.stderr.write(
- '\n\nInstalling Salt requires setuptools >= 9.1\n'
- 'Available setuptools version is {}\n\n'.format(setuptools.__version__)
- )
- sys.stderr.flush()
- sys.exit(1)
-
# Let's set the running_salt_install attribute so we can add
# _version.py in the build command
self.distribution.running_salt_install = True
--
2.23.0
++++++ move-server_id-deprecation-warning-to-reduce-log-spa.patch ++++++
From c375d1e25e8b5c77b6a8f89855f17df6e49db9f2 Mon Sep 17 00:00:00 2001
From: Mihai Dinca <mdinca@suse.de>
Date: Fri, 14 Jun 2019 15:13:12 +0200
Subject: [PATCH] Move server_id deprecation warning to reduce log
spamming (bsc#1135567) (bsc#1135732)
---
salt/grains/core.py | 4 ----
salt/minion.py | 9 +++++++++
2 files changed, 9 insertions(+), 4 deletions(-)
diff --git a/salt/grains/core.py b/salt/grains/core.py
index b58c29dbc3..0f3ccd9b92 100644
--- a/salt/grains/core.py
+++ b/salt/grains/core.py
@@ -2890,10 +2890,6 @@ def get_server_id():
if bool(use_crc):
id_hash = getattr(zlib, use_crc, zlib.adler32)(__opts__.get('id', '').encode()) & 0xffffffff
else:
- salt.utils.versions.warn_until('Sodium', 'This server_id is computed nor by Adler32 neither by CRC32. '
- 'Please use "server_id_use_crc" option and define algorithm you'
- 'prefer (default "Adler32"). The server_id will be computed with'
- 'Adler32 by default.')
id_hash = _get_hash_by_shell()
server_id = {'server_id': id_hash}
diff --git a/salt/minion.py b/salt/minion.py
index 457f485b0a..4730f68b87 100644
--- a/salt/minion.py
+++ b/salt/minion.py
@@ -97,6 +97,7 @@ from salt.utils.odict import OrderedDict
from salt.utils.process import (default_signals,
SignalHandlingProcess,
ProcessManager)
+from salt.utils.versions import warn_until
from salt.exceptions import (
CommandExecutionError,
CommandNotFoundError,
@@ -1002,6 +1003,14 @@ class MinionManager(MinionBase):
if (self.opts['master_type'] in ('failover', 'distributed')) or not isinstance(self.opts['master'], list):
masters = [masters]
+ if not self.opts.get('server_id_use_crc'):
+ warn_until(
+ 'Sodium',
+ 'This server_id is computed nor by Adler32 neither by CRC32. '
+ 'Please use "server_id_use_crc" option and define algorithm you'
+ 'prefer (default "Adler32"). The server_id will be computed with'
+ 'Adler32 by default.')
+
beacons_leader = True
for master in masters:
s_opts = copy.deepcopy(self.opts)
--
2.16.4
++++++ opensuse-3000-libvirt-engine-fixes-248.patch ++++++
++++ 1335 lines (skipped)
++++++ opensuse-3000-spacewalk-runner-parse-command-247.patch ++++++
From b8dd66a71d051a55e68778c83b69250c88d4286a Mon Sep 17 00:00:00 2001
From: Alexander Graul <agraul@suse.com>
Date: Fri, 3 Jul 2020 14:07:55 +0200
Subject: [PATCH] openSUSE-3000 spacewalk runner parse command (#247)
* Accept nested namespaces in spacewalk.api
salt-run $server spacewalk.api allows users to run arbitrary Spacewalk
API functions through Salt. These are passed in a namespace.method
notation and may use nested namespaces. Previously only methods in a
top-level namespace were supported.
Fixes https://github.com/saltstack/salt/issues/57442
Co-authored-by: Wayne Werner <wwerner@saltstack.com>
* Add spacewalk runner command parsing tests
Co-authored-by: Wayne Werner <wwerner@saltstack.com>
---
changelog/57442.fixed | 1 +
salt/runners/spacewalk.py | 6 +++-
tests/unit/runners/test_spacewalk.py | 50 ++++++++++++++++++++++++++++
3 files changed, 56 insertions(+), 1 deletion(-)
create mode 100644 changelog/57442.fixed
create mode 100644 tests/unit/runners/test_spacewalk.py
diff --git a/changelog/57442.fixed b/changelog/57442.fixed
new file mode 100644
index 0000000000..81f394880f
--- /dev/null
+++ b/changelog/57442.fixed
@@ -0,0 +1 @@
+Accept nested namespaces in spacewalk.api runner function.
diff --git a/salt/runners/spacewalk.py b/salt/runners/spacewalk.py
index 07ca9bd711..df4e568a28 100644
--- a/salt/runners/spacewalk.py
+++ b/salt/runners/spacewalk.py
@@ -172,7 +172,11 @@ def api(server, command, *args, **kwargs):
log.error(err_msg)
return {call: err_msg}
- namespace, method = command.split('.')
+ namespace, _, method = command.rpartition(".")
+ if not namespace:
+ return {
+ call: "Error: command must use the following format: 'namespace.method'"
+ }
endpoint = getattr(getattr(client, namespace), method)
try:
diff --git a/tests/unit/runners/test_spacewalk.py b/tests/unit/runners/test_spacewalk.py
new file mode 100644
index 0000000000..5b64069cc9
--- /dev/null
+++ b/tests/unit/runners/test_spacewalk.py
@@ -0,0 +1,50 @@
+# -*- coding: utf-8 -*-
+"""
+Unit tests for Spacewalk runner
+"""
+import salt.runners.spacewalk as spacewalk
+from tests.support.mock import Mock, call, patch
+from tests.support.unit import TestCase
+
+
+class SpacewalkTest(TestCase):
+ """Test the Spacewalk runner"""
+
+ def test_api_command_must_have_namespace(self):
+ _get_session_mock = Mock(return_value=(None, None))
+
+ with patch.object(spacewalk, "_get_session", _get_session_mock):
+ result = spacewalk.api("mocked.server", "badMethod")
+ assert result == {
+ "badMethod ()": "Error: command must use the following format: 'namespace.method'"
+ }
+
+ def test_api_command_accepts_single_namespace(self):
+ client_mock = Mock()
+ _get_session_mock = Mock(return_value=(client_mock, "key"))
+ getattr_mock = Mock(return_value="mocked_getattr_return")
+
+ with patch.object(spacewalk, "_get_session", _get_session_mock):
+ with patch.object(spacewalk, "getattr", getattr_mock):
+ spacewalk.api("mocked.server", "system.listSystems")
+ getattr_mock.assert_has_calls(
+ [
+ call(client_mock, "system"),
+ call("mocked_getattr_return", "listSystems"),
+ ]
+ )
+
+ def test_api_command_accepts_nested_namespace(self):
+ client_mock = Mock()
+ _get_session_mock = Mock(return_value=(client_mock, "key"))
+ getattr_mock = Mock(return_value="mocked_getattr_return")
+
+ with patch.object(spacewalk, "_get_session", _get_session_mock):
+ with patch.object(spacewalk, "getattr", getattr_mock):
+ spacewalk.api("mocked.server", "channel.software.listChildren")
+ getattr_mock.assert_has_calls(
+ [
+ call(client_mock, "channel.software"),
+ call("mocked_getattr_return", "listChildren"),
+ ]
+ )
--
2.27.0
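The core of the fix is swapping str.split for str.rpartition: with a nested namespace only the last dot separates the method name, so command.split('.') would have raised a "too many values to unpack" error for channel.software.listChildren. In isolation:

    def split_command(command):
        namespace, _, method = command.rpartition(".")
        if not namespace:
            raise ValueError(
                "command must use the following format: 'namespace.method'")
        return namespace, method

    assert split_command("system.listSystems") == ("system", "listSystems")
    assert split_command("channel.software.listChildren") == (
        "channel.software", "listChildren")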
++++++ opensuse-3000-virt-defined-states-222.patch ++++++
++++ 2377 lines (skipped)
++++++ opensuse-3000.2-virt-backports-236.patch ++++++
++++ 21124 lines (skipped)
++++++ option-to-en-disable-force-refresh-in-zypper-215.patch ++++++
From 9c2a45426043531613e089330d5aac8ae6fe6e15 Mon Sep 17 00:00:00 2001
From: darix <darix@users.noreply.github.com>
Date: Tue, 12 May 2020 13:58:15 +0200
Subject: [PATCH] Option to en-/disable force refresh in zypper (#215)
The default will still be force refresh to keep existing setups working.
1. Pillar option to turn off force refresh
```
zypper:
refreshdb_force: false
```
2. Cmdline option to force refresh.
```
salt '*' pkg.refresh_db [force=true|false]
```
The cmdline option will override the pillar as well.
Co-authored-by: Alexander Graul <agraul@suse.com>
---
salt/modules/zypperpkg.py | 32 ++++++++++++++++++++--------
tests/unit/modules/test_zypperpkg.py | 24 +++++++++++++++++++--
2 files changed, 45 insertions(+), 11 deletions(-)
diff --git a/salt/modules/zypperpkg.py b/salt/modules/zypperpkg.py
index e3f802a911..ed8420f398 100644
--- a/salt/modules/zypperpkg.py
+++ b/salt/modules/zypperpkg.py
@@ -1279,25 +1279,39 @@ def mod_repo(repo, **kwargs):
return repo
-def refresh_db(root=None):
- '''
- Force a repository refresh by calling ``zypper refresh --force``, return a dict::
+def refresh_db(root=None, force=None):
+ """
+ Trigger a repository refresh by calling ``zypper refresh``. Refresh will run
+ with ``--force`` if the "force=True" flag is passed on the CLI or
+ ``refreshdb_force`` is set to ``true`` in the pillar. The CLI option
+ overrides the pillar setting.
- {'<database name>': Bool}
+ It will return a dict::
- root
- operate on a different root directory.
+ {'<database name>': Bool}
CLI Example:
.. code-block:: bash
- salt '*' pkg.refresh_db
- '''
+ salt '*' pkg.refresh_db [force=true|false]
+
+ Pillar Example:
+
+ .. code-block:: yaml
+
+ zypper:
+ refreshdb_force: false
+ """
# Remove rtag file to keep multiple refreshes from happening in pkg states
salt.utils.pkg.clear_rtag(__opts__)
ret = {}
- out = __zypper__(root=root).refreshable.call('refresh', '--force')
+ refresh_opts = ['refresh']
+ if force is None:
+ force = __pillar__.get('zypper', {}).get('refreshdb_force', True)
+ if force:
+ refresh_opts.append('--force')
+ out = __zypper__(root=root).refreshable.call(*refresh_opts)
for line in out.splitlines():
if not line:
diff --git a/tests/unit/modules/test_zypperpkg.py b/tests/unit/modules/test_zypperpkg.py
index 2a8e753b9d..9a5c59a857 100644
--- a/tests/unit/modules/test_zypperpkg.py
+++ b/tests/unit/modules/test_zypperpkg.py
@@ -278,12 +278,32 @@ class ZypperTestCase(TestCase, LoaderModuleMockMixin):
'stderr': '', 'stdout': '\n'.join(ref_out), 'retcode': 0
}
- with patch.dict(zypper.__salt__, {'cmd.run_all': MagicMock(return_value=run_out)}):
- with patch.object(salt.utils.pkg, 'clear_rtag', Mock()):
+ zypper_mock = MagicMock(return_value=run_out)
+ call_kwargs = {
+ "output_loglevel": "trace",
+ "python_shell": False,
+ "env": {}
+ }
+ with patch.dict(zypper.__salt__, {"cmd.run_all": zypper_mock}):
+ with patch.object(salt.utils.pkg, "clear_rtag", Mock()):
result = zypper.refresh_db()
self.assertEqual(result.get("openSUSE-Leap-42.1-LATEST"), False)
self.assertEqual(result.get("openSUSE-Leap-42.1-Update"), False)
self.assertEqual(result.get("openSUSE-Leap-42.1-Update-Non-Oss"), True)
+ zypper_mock.assert_called_with(
+ ["zypper", "--non-interactive", "refresh", "--force"],
+ **call_kwargs
+ )
+ zypper.refresh_db(force=False)
+ zypper_mock.assert_called_with(
+ ["zypper", "--non-interactive", "refresh"],
+ **call_kwargs
+ )
+ zypper.refresh_db(force=True)
+ zypper_mock.assert_called_with(
+ ["zypper", "--non-interactive", "refresh", "--force"],
+ **call_kwargs
+ )
def test_info_installed(self):
'''
--
2.26.2
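The precedence rule the patch implements is easy to state on its own: an explicit force= on the command line always wins; otherwise the zypper:refreshdb_force pillar decides; when neither is set, the old force-refresh behaviour is kept. A sketch:

    def build_refresh_opts(pillar, force=None):
        refresh_opts = ["refresh"]
        if force is None:  # CLI flag absent: fall back to the pillar
            force = pillar.get("zypper", {}).get("refreshdb_force", True)
        if force:
            refresh_opts.append("--force")
        return refresh_opts

    assert build_refresh_opts({}) == ["refresh", "--force"]
    assert build_refresh_opts({"zypper": {"refreshdb_force": False}}) == ["refresh"]
    assert build_refresh_opts({"zypper": {"refreshdb_force": False}},
                              force=True) == ["refresh", "--force"]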
++++++ prevent-ansiblegate-unit-tests-to-fail-on-ubuntu.patch ++++++
From 73afbe5fe00c47427a032f8d94c113e1375e32ea Mon Sep 17 00:00:00 2001
From: Pablo Suárez Hernández <psuarezhernandez@suse.com>
Date: Mon, 8 Jul 2019 14:46:10 +0100
Subject: [PATCH] Prevent ansiblegate unit tests to fail on Ubuntu
---
tests/unit/modules/test_ansiblegate.py | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/tests/unit/modules/test_ansiblegate.py b/tests/unit/modules/test_ansiblegate.py
index b7b43efda4..05dff4a4fa 100644
--- a/tests/unit/modules/test_ansiblegate.py
+++ b/tests/unit/modules/test_ansiblegate.py
@@ -169,9 +169,11 @@ description:
with patch('salt.utils.timed_subprocess.TimedProc', proc):
ret = _ansible_module_caller.call("one.two.three", "arg_1", kwarg1="foobar")
if six.PY3:
- proc.assert_any_call(['echo', '{"ANSIBLE_MODULE_ARGS": {"kwarg1": "foobar", "_raw_params": "arg_1"}}'], stdout=-1, timeout=1200)
proc.assert_any_call(['python3', 'foofile'], stdin=ANSIBLE_MODULE_ARGS, stdout=-1, timeout=1200)
else:
- proc.assert_any_call(['echo', '{"ANSIBLE_MODULE_ARGS": {"_raw_params": "arg_1", "kwarg1": "foobar"}}'], stdout=-1, timeout=1200)
proc.assert_any_call(['python', 'foofile'], stdin=ANSIBLE_MODULE_ARGS, stdout=-1, timeout=1200)
+ try:
+ proc.assert_any_call(['echo', '{"ANSIBLE_MODULE_ARGS": {"kwarg1": "foobar", "_raw_params": "arg_1"}}'], stdout=-1, timeout=1200)
+ except AssertionError:
+ proc.assert_any_call(['echo', '{"ANSIBLE_MODULE_ARGS": {"_raw_params": "arg_1", "kwarg1": "foobar"}}'], stdout=-1, timeout=1200)
assert ret == {"completed": True, "timeout": 1200}
--
2.16.4
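
The flakiness fixed here comes from serializing a dict whose key order is not
guaranteed on older interpreters, so the test now accepts either ordering. The
same idiom in isolation, against a plain mock (the names are illustrative, not
the real fixtures):

    from unittest.mock import MagicMock

    proc = MagicMock()
    proc(['echo', '{"a": 1, "b": 2}'])

    try:
        proc.assert_any_call(['echo', '{"a": 1, "b": 2}'])
    except AssertionError:
        # Fall back to the other serialization order.
        proc.assert_any_call(['echo', '{"b": 2, "a": 1}'])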
++++++ prevent-logging-deadlock-on-salt-api-subprocesses-bs.patch ++++++
From 76f450aa8123ca3ad7f7a0205e234d1dec4ad425 Mon Sep 17 00:00:00 2001
From: Pablo Suárez Hernández <psuarezhernandez@suse.com>
Date: Wed, 22 Jan 2020 08:19:55 +0000
Subject: [PATCH] Prevent logging deadlock on salt-api subprocesses
(bsc#1159284)
---
salt/_logging/impl.py | 60 ++++++++++++++++++++---------------
salt/client/ssh/__init__.py | 16 +++++++---
salt/client/ssh/client.py | 9 +++++-
salt/client/ssh/wrapper/cp.py | 2 +-
salt/loader.py | 2 +-
salt/utils/lazy.py | 5 ++-
6 files changed, 61 insertions(+), 33 deletions(-)
diff --git a/salt/_logging/impl.py b/salt/_logging/impl.py
index 347259bcf506705e2ea1a24da030a7132eb8a527..fdfabf6d3b16619350107bd01f2c3a606fb93262 100644
--- a/salt/_logging/impl.py
+++ b/salt/_logging/impl.py
@@ -19,6 +19,7 @@ PROFILE = logging.PROFILE = 15
TRACE = logging.TRACE = 5
GARBAGE = logging.GARBAGE = 1
QUIET = logging.QUIET = 1000
+DEBUG = logging.DEBUG = 10
# Import Salt libs
from salt._logging.handlers import StreamHandler
@@ -187,11 +188,11 @@ class SaltLoggingClass(six.with_metaclass(LoggingMixinMeta, LOGGING_LOGGER_CLASS
'''
instance = super(SaltLoggingClass, cls).__new__(cls)
- try:
- max_logger_length = len(max(
- list(logging.Logger.manager.loggerDict), key=len
- ))
- for handler in logging.root.handlers:
+ max_logger_length = len(max(
+ list(logging.Logger.manager.loggerDict), key=len
+ ))
+ for handler in logging.root.handlers:
+ try:
if handler in (LOGGING_NULL_HANDLER,
LOGGING_STORE_HANDLER,
LOGGING_TEMP_HANDLER):
@@ -210,18 +211,15 @@ class SaltLoggingClass(six.with_metaclass(LoggingMixinMeta, LOGGING_LOGGER_CLASS
match = MODNAME_PATTERN.search(fmt)
if not match:
# Not matched. Release handler and return.
- handler.release()
return instance
if 'digits' not in match.groupdict():
# No digits group. Release handler and return.
- handler.release()
return instance
digits = match.group('digits')
if not digits or not (digits and digits.isdigit()):
# No valid digits. Release handler and return.
- handler.release()
return instance
if int(digits) < max_logger_length:
@@ -233,9 +231,14 @@ class SaltLoggingClass(six.with_metaclass(LoggingMixinMeta, LOGGING_LOGGER_CLASS
)
handler.setFormatter(formatter)
handler.release()
- except ValueError:
- # There are no registered loggers yet
- pass
+ except ValueError:
+ # There are no registered loggers yet
+ pass
+ finally:
+ try:
+ handler.release()
+ except:
+ pass
return instance
def _log(self, level, msg, args, exc_info=None,
@@ -278,20 +281,26 @@ class SaltLoggingClass(six.with_metaclass(LoggingMixinMeta, LOGGING_LOGGER_CLASS
else:
extra['exc_info_on_loglevel'] = exc_info_on_loglevel
- if sys.version_info < (3,):
- LOGGING_LOGGER_CLASS._log(
- self, level, msg, args, exc_info=exc_info, extra=extra
- )
- elif sys.version_info < (3, 8):
- LOGGING_LOGGER_CLASS._log(
- self, level, msg, args, exc_info=exc_info, extra=extra,
- stack_info=stack_info
- )
- else:
- LOGGING_LOGGER_CLASS._log(
- self, level, msg, args, exc_info=exc_info, extra=extra,
- stack_info=stack_info, stacklevel=stacklevel
- )
+ try:
+ logging._acquireLock()
+ if sys.version_info < (3,):
+ LOGGING_LOGGER_CLASS._log(
+ self, level, msg, args, exc_info=exc_info, extra=extra
+ )
+ elif sys.version_info < (3, 8):
+ LOGGING_LOGGER_CLASS._log(
+ self, level, msg, args, exc_info=exc_info, extra=extra,
+ stack_info=stack_info
+ )
+ else:
+ LOGGING_LOGGER_CLASS._log(
+ self, level, msg, args, exc_info=exc_info, extra=extra,
+ stack_info=stack_info, stacklevel=stacklevel
+ )
+ except:
+ pass
+ finally:
+ logging._releaseLock()
def makeRecord(self, name, level, fn, lno, msg, args, exc_info,
func=None, extra=None, sinfo=None):
@@ -393,6 +402,7 @@ if logging.getLoggerClass() is not SaltLoggingClass:
logging.addLevelName(PROFILE, 'PROFILE')
logging.addLevelName(TRACE, 'TRACE')
logging.addLevelName(GARBAGE, 'GARBAGE')
+ logging.addLevelName(DEBUG, 'DEBUG')
# ----- REMOVE ON REFACTORING COMPLETE -------------------------------------------------------------------------->
if not logging.root.handlers:
diff --git a/salt/client/ssh/__init__.py b/salt/client/ssh/__init__.py
index d9e91b0f50bfaa76d519fcaa4bdc868bce80f554..e8aad093e0f6df32faa16a838f1db2c6746e1b8e 100644
--- a/salt/client/ssh/__init__.py
+++ b/salt/client/ssh/__init__.py
@@ -520,7 +520,9 @@ class SSH(object):
mine=mine,
**target)
ret = {'id': single.id}
+ logging._acquireLock()
stdout, stderr, retcode = single.run()
+ logging._releaseLock()
# This job is done, yield
try:
data = salt.utils.json.find_json(stdout)
@@ -586,10 +588,16 @@ class SSH(object):
self.targets[host],
mine,
)
- routine = Process(
- target=self.handle_routine,
- args=args)
- routine.start()
+ try:
+ logging._acquireLock()
+ routine = Process(
+ target=self.handle_routine,
+ args=args)
+ routine.start()
+ except:
+ pass
+ finally:
+ logging._releaseLock()
running[host] = {'thread': routine}
continue
ret = {}
diff --git a/salt/client/ssh/client.py b/salt/client/ssh/client.py
index e8e634ca12d85f1e1a9e047f43eac8c041cc5666..d4a89cf4fbbde5282597dc6b82c66dde4288edf1 100644
--- a/salt/client/ssh/client.py
+++ b/salt/client/ssh/client.py
@@ -6,6 +6,8 @@ import os
import copy
import logging
import random
+import time
+import multiprocessing
# Import Salt libs
import salt.config
@@ -15,6 +17,7 @@ from salt.exceptions import SaltClientError # Temporary
log = logging.getLogger(__name__)
+_LOCK = multiprocessing.Lock()
class SSHClient(object):
'''
@@ -61,7 +64,11 @@ class SSHClient(object):
opts['selected_target_option'] = tgt_type
opts['tgt'] = tgt
opts['arg'] = arg
- return salt.client.ssh.SSH(opts)
+ _LOCK.acquire()
+ ret = salt.client.ssh.SSH(opts)
+ time.sleep(0.01)
+ _LOCK.release()
+ return ret
def cmd_iter(
self,
diff --git a/salt/client/ssh/wrapper/cp.py b/salt/client/ssh/wrapper/cp.py
index 894e62f94c87ae5b68c1f82fc3e80ec8f25ac118..9bf0c150a071b4bfa780fe0293e3d8e93ab8e6ef 100644
--- a/salt/client/ssh/wrapper/cp.py
+++ b/salt/client/ssh/wrapper/cp.py
@@ -4,7 +4,7 @@ Wrap the cp module allowing for managed ssh file transfers
'''
# Import Python libs
from __future__ import absolute_import, print_function
-import logging
+import salt.log.setup as logging
import os
# Import salt libs
diff --git a/salt/loader.py b/salt/loader.py
index 54dadb0b513dbaa4914b0d4b1d343dde709699ad..b824a70a0cc40128f3271f70f676f1551194236c 100644
--- a/salt/loader.py
+++ b/salt/loader.py
@@ -11,7 +11,7 @@ import os
import re
import sys
import time
-import logging
+import salt.log.setup as logging
import inspect
import tempfile
import functools
diff --git a/salt/utils/lazy.py b/salt/utils/lazy.py
index 3cd6489d2d8c50ec4e6eb70c50407f1084db377b..bb4b38e1a3cfa05945cd438fc9d30e7c47c3391b 100644
--- a/salt/utils/lazy.py
+++ b/salt/utils/lazy.py
@@ -5,7 +5,8 @@ Lazily-evaluated data structures, primarily used by Salt's loader
# Import Python Libs
from __future__ import absolute_import, unicode_literals
-import logging
+import salt.log.setup as logging
+import time
import salt.exceptions
try:
@@ -102,9 +103,11 @@ class LazyDict(MutableMapping):
# load the item
if self._load(key):
log.debug('LazyLoaded %s', key)
+ time.sleep(0.0001)
return self._dict[key]
else:
log.debug('Could not LazyLoad %s: %s', key, self.missing_fun_string(key))
+ time.sleep(0.0001)
raise KeyError(key)
else:
return self._dict[key]
--
2.23.0
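
The recurring pattern in this patch is to serialize process creation with the
logging module lock, so that a child is never forked while another thread
holds that lock (the child would inherit it in the locked state and hang on
its first log call). A reduced sketch of the pattern; logging._acquireLock()
and logging._releaseLock() are private CPython APIs, which is part of why this
ships as a distribution workaround rather than an upstream fix:

    import logging
    from multiprocessing import Process

    def start_logged(target, *args):
        # Keep the logging lock for the duration of the fork so the child
        # never inherits it in a locked state.
        logging._acquireLock()
        try:
            proc = Process(target=target, args=args)
            proc.start()
        finally:
            logging._releaseLock()
        return proc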
++++++ prevent-systemd-run-description-issue-when-running-a.patch ++++++
From 29316e1e73972d7c30a7b125a27198fefc6b2fd7 Mon Sep 17 00:00:00 2001
From: Pablo Suárez Hernández <psuarezhernandez@suse.com>
Date: Mon, 30 Sep 2019 12:06:08 +0100
Subject: [PATCH] Prevent systemd-run description issue when running
aptpkg (bsc#1152366)
---
salt/modules/aptpkg.py | 2 +-
tests/unit/modules/test_aptpkg.py | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/salt/modules/aptpkg.py b/salt/modules/aptpkg.py
index bafad40efe..2835d32263 100644
--- a/salt/modules/aptpkg.py
+++ b/salt/modules/aptpkg.py
@@ -168,7 +168,7 @@ def _call_apt(args, scope=True, **kwargs):
'''
cmd = []
if scope and salt.utils.systemd.has_scope(__context__) and __salt__['config.get']('systemd.scope', True):
- cmd.extend(['systemd-run', '--scope', '--description "{0}"'.format(__name__)])
+ cmd.extend(['systemd-run', '--scope', '--description', '"{0}"'.format(__name__)])
cmd.extend(args)
params = {'output_loglevel': 'trace',
diff --git a/tests/unit/modules/test_aptpkg.py b/tests/unit/modules/test_aptpkg.py
index 88eed062c4..2224aba9a1 100644
--- a/tests/unit/modules/test_aptpkg.py
+++ b/tests/unit/modules/test_aptpkg.py
@@ -645,7 +645,7 @@ class AptUtilsTestCase(TestCase, LoaderModuleMockMixin):
with patch.dict(aptpkg.__salt__, {'cmd.run_all': MagicMock(), 'config.get': MagicMock(return_value=True)}):
aptpkg._call_apt(['apt-get', 'purge', 'vim']) # pylint: disable=W0106
aptpkg.__salt__['cmd.run_all'].assert_called_once_with(
- ['systemd-run', '--scope', '--description "salt.modules.aptpkg"', 'apt-get', 'purge', 'vim'], env={},
+ ['systemd-run', '--scope', '--description', '"salt.modules.aptpkg"', 'apt-get', 'purge', 'vim'], env={},
output_loglevel='trace', python_shell=False)
def test_call_apt_with_kwargs(self):
--
2.16.4
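
The root cause is argv quoting: with python_shell=False every list element
reaches execve() verbatim, so the old single element
'--description "salt.modules.aptpkg"' arrived at systemd-run as one unknown
option string. A shell would have split it, as this small check shows:

    import shlex

    single = '--description "salt.modules.aptpkg"'
    # What a shell would do with the old form, and what execve() does not:
    assert shlex.split(single) == ['--description', 'salt.modules.aptpkg']
    # Hence the fix: pass the option and its value as two argv elements.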
++++++ prevent-test_mod_del_repo_multiline_values-to-fail.patch ++++++
From c820b9e652474b4866fe099a709b52fe3b715ce9 Mon Sep 17 00:00:00 2001
From: Pablo Suárez Hernández <psuarezhernandez@suse.com>
Date: Wed, 27 Nov 2019 15:41:57 +0000
Subject: [PATCH] Prevent test_mod_del_repo_multiline_values from failing
---
tests/integration/modules/test_pkg.py | 20 ++++++++++++++------
1 file changed, 14 insertions(+), 6 deletions(-)
diff --git a/tests/integration/modules/test_pkg.py b/tests/integration/modules/test_pkg.py
index 61748f9477..6f3767bfbd 100644
--- a/tests/integration/modules/test_pkg.py
+++ b/tests/integration/modules/test_pkg.py
@@ -167,17 +167,24 @@ class PkgModuleTest(ModuleCase, SaltReturnAssertsMixin):
enabled=enabled,
failovermethod=failovermethod,
)
- # return data from pkg.mod_repo contains the file modified at
- # the top level, so use next(iter(ret)) to get that key
self.assertNotEqual(ret, {})
- repo_info = ret[next(iter(ret))]
+ repo_info = {repo: ret}
self.assertIn(repo, repo_info)
- self.assertEqual(repo_info[repo]['baseurl'], my_baseurl)
+ if os_grain == 'SUSE':
+ self.assertEqual(repo_info[repo]['baseurl'], expected_get_repo_baseurl_zypp)
+ else:
+ self.assertEqual(repo_info[repo]['baseurl'], my_baseurl)
ret = self.run_function('pkg.get_repo', [repo])
- self.assertEqual(ret['baseurl'], expected_get_repo_baseurl)
+ if os_grain == 'SUSE':
+ self.assertEqual(repo_info[repo]['baseurl'], expected_get_repo_baseurl_zypp)
+ else:
+ self.assertEqual(ret['baseurl'], expected_get_repo_baseurl)
self.run_function('pkg.mod_repo', [repo])
ret = self.run_function('pkg.get_repo', [repo])
- self.assertEqual(ret['baseurl'], expected_get_repo_baseurl)
+ if os_grain == 'SUSE':
+ self.assertEqual(repo_info[repo]['baseurl'], expected_get_repo_baseurl_zypp)
+ else:
+ self.assertEqual(ret['baseurl'], expected_get_repo_baseurl)
finally:
if repo is not None:
self.run_function('pkg.del_repo', [repo])
@@ -191,6 +198,7 @@ class PkgModuleTest(ModuleCase, SaltReturnAssertsMixin):
try:
if os_grain in ['CentOS', 'RedHat', 'SUSE']:
my_baseurl = 'http://my.fake.repo/foo/bar/\n http://my.fake.repo.alt/foo/bar/'
+ expected_get_repo_baseurl_zypp = 'http://my.fake.repo/foo/bar/%0A%20http://my.fake.repo.alt/foo/bar/'
expected_get_repo_baseurl = 'http://my.fake.repo/foo/bar/\nhttp://my.fake.repo.alt/foo/bar/'
major_release = int(
self.run_function(
--
2.16.4
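
The SUSE-specific expectation exists because zypper stores a multi-line
baseurl percent-encoded: the newline becomes %0A and the leading space %20.
Whether libzypp follows exactly urllib's quoting rules is an assumption here,
but the encoded form used by the test can be reproduced like this:

    from urllib.parse import quote

    my_baseurl = 'http://my.fake.repo/foo/bar/\n http://my.fake.repo.alt/foo/bar/'
    encoded = quote(my_baseurl, safe=':/')
    assert encoded == 'http://my.fake.repo/foo/bar/%0A%20http://my.fake.repo.alt/foo/bar/'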
++++++ provide-the-missing-features-required-for-yomi-yet-o.patch ++++++
++++ 7074 lines (skipped)
++++++ python3.8-compatibility-pr-s-235.patch ++++++
++++ 1995 lines (skipped)
++++++ re-adding-function-to-test-for-root.patch ++++++
From a6792f951f8090d8326de049eb48bb4a11291e06 Mon Sep 17 00:00:00 2001
From: Jochen Breuer <jbreuer@suse.de>
Date: Fri, 20 Mar 2020 13:58:54 +0100
Subject: [PATCH] Re-adding function to test for root
---
tests/unit/modules/test_rpm_lowpkg.py | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/tests/unit/modules/test_rpm_lowpkg.py b/tests/unit/modules/test_rpm_lowpkg.py
index 54b81f6972..b6cbd9e5cb 100644
--- a/tests/unit/modules/test_rpm_lowpkg.py
+++ b/tests/unit/modules/test_rpm_lowpkg.py
@@ -18,6 +18,11 @@ from tests.support.mock import (
import salt.modules.rpm_lowpkg as rpm
+def _called_with_root(mock):
+ cmd = ' '.join(mock.call_args[0][0])
+ return cmd.startswith('rpm --root /')
+
+
class RpmTestCase(TestCase, LoaderModuleMockMixin):
'''
Test cases for salt.modules.rpm
--
2.16.4
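
For context, a hypothetical use of the restored helper against a plain mock
(the real tests pass in the patched cmd.run mock instead):

    from unittest.mock import MagicMock

    def _called_with_root(mock):
        cmd = ' '.join(mock.call_args[0][0])
        return cmd.startswith('rpm --root /')

    cmd_mock = MagicMock()
    cmd_mock(['rpm', '--root', '/mnt', '-qa'])
    assert _called_with_root(cmd_mock)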
++++++ read-repo-info-without-using-interpolation-bsc-11356.patch ++++++
From b502d73be38aeb509a6c5324cdc9bb94d7220c0a Mon Sep 17 00:00:00 2001
From: Mihai Dinca <mdinca@suse.de>
Date: Thu, 7 Nov 2019 15:11:49 +0100
Subject: [PATCH] Read repo info without using interpolation
(bsc#1135656)
---
salt/modules/zypperpkg.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/salt/modules/zypperpkg.py b/salt/modules/zypperpkg.py
index 5f3b6d6855..0c15214e5e 100644
--- a/salt/modules/zypperpkg.py
+++ b/salt/modules/zypperpkg.py
@@ -1045,7 +1045,7 @@ def _get_repo_info(alias, repos_cfg=None, root=None):
Get one repo meta-data.
'''
try:
- meta = dict((repos_cfg or _get_configured_repos(root=root)).items(alias))
+ meta = dict((repos_cfg or _get_configured_repos(root=root)).items(alias, raw=True))
meta['alias'] = alias
for key, val in six.iteritems(meta):
if val in ['0', '1']:
--
2.16.4
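
This one-word fix matters because configparser interpolates '%' by default,
and percent escapes are exactly what zypper writes into multi-line baseurls
(see bsc#1135656). A self-contained reproduction of the failure mode and of
the raw access that avoids it:

    import configparser

    cfg = configparser.ConfigParser()
    cfg.read_string('[repo]\nbaseurl = http://example.com/a%0Ab\n')

    try:
        dict(cfg.items('repo'))  # interpolation chokes on the lone '%0A'
    except configparser.InterpolationSyntaxError:
        pass

    meta = dict(cfg.items('repo', raw=True))  # raw access returns it verbatim
    assert meta['baseurl'] == 'http://example.com/a%0Ab'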
++++++ reintroducing-reverted-changes.patch ++++++
From da91692b5a6cc0b895fa2a1a3a6d0c21d9913ebf Mon Sep 17 00:00:00 2001
From: Jochen Breuer <jbreuer@suse.de>
Date: Wed, 25 Mar 2020 15:18:51 +0100
Subject: [PATCH] Reintroducing reverted changes
Reintroducing changes from commit e20362f6f053eaa4144583604e6aac3d62838419
that got partially reverted by this commit:
https://github.com/openSUSE/salt/commit/d0ef24d113bdaaa29f180031b5da384cffe…
---
tests/unit/modules/test_aptpkg.py | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/tests/unit/modules/test_aptpkg.py b/tests/unit/modules/test_aptpkg.py
index 2224aba9a1..ba1d874e69 100644
--- a/tests/unit/modules/test_aptpkg.py
+++ b/tests/unit/modules/test_aptpkg.py
@@ -253,7 +253,9 @@ class AptPkgTestCase(TestCase, LoaderModuleMockMixin):
if installed['wget'].get(names[name], False):
installed['wget'][name] = installed['wget'].pop(names[name])
- assert aptpkg.info_installed('wget') == installed
+ del installed['wget']['status']
+ self.assertEqual(aptpkg.info_installed('wget'), installed)
+ self.assertEqual(len(aptpkg.info_installed()), 1)
@patch('salt.modules.aptpkg.__salt__', {'lowpkg.info': MagicMock(return_value=LOWPKG_INFO)})
def test_info_installed_attr(self):
--
2.16.4
++++++ remove-arch-from-name-when-pkg.list_pkgs-is-called-w.patch ++++++
From dcaf5a98cfb4e4fd874dd0ec17630d8b7650f5f9 Mon Sep 17 00:00:00 2001
From: Pablo Suárez Hernández <psuarezhernandez@suse.com>
Date: Mon, 19 Nov 2018 11:46:26 +0000
Subject: [PATCH] Remove arch from name when pkg.list_pkgs is called with
'attr' (bsc#1114029)
Add unit tests for pkg_resource.format_pkg_list
Fix pylint issues
Refactor: Return requested attr even if empty
Add corner cases on package names to unit tests
Fix Zypper/Yum unit test after returning empty requested attrs
Add Yum/Zypper list_pkgs unit tests for multiple versions reported
Compare testing items properly to avoid unwanted failures
Use assertCountEqual when running on Python3
Add missing import for the six module
Strip architecture from package name in aptpkg module
Use parse_arch_from_name if available on the virtual pkg module
Adapt unit tests after introducing parse_arch_from_name
Use PKG_ARCH_SEPARATOR in pkg.normalize_name method
Add pkg_resource to setup loader modules. Fix pylint
Remove unnecessary lambda
Return None instead empty string for arch and release in pkg.list_pkgs
---
salt/modules/aptpkg.py | 4 +--
salt/modules/pkg_resource.py | 13 ++++-----
salt/modules/yumpkg.py | 4 +--
salt/modules/zypperpkg.py | 4 +--
tests/unit/modules/test_pkg_resource.py | 2 +-
tests/unit/modules/test_yumpkg.py | 51 ++-------------------------------
tests/unit/modules/test_zypperpkg.py | 4 +--
7 files changed, 18 insertions(+), 64 deletions(-)
diff --git a/salt/modules/aptpkg.py b/salt/modules/aptpkg.py
index 3b0d8423db..345b8422d9 100644
--- a/salt/modules/aptpkg.py
+++ b/salt/modules/aptpkg.py
@@ -206,7 +206,7 @@ def normalize_name(name):
return name
-def parse_arch(name):
+def parse_arch_from_name(name):
'''
Parse name and architecture from the specified package name.
@@ -214,7 +214,7 @@ def parse_arch(name):
.. code-block:: bash
- salt '*' pkg.parse_arch zsh:amd64
+ salt '*' pkg.parse_arch_from_name zsh:amd64
'''
try:
_name, _arch = name.rsplit(PKG_ARCH_SEPARATOR, 1)
diff --git a/salt/modules/pkg_resource.py b/salt/modules/pkg_resource.py
index 8fa3a074fa..0c872f1805 100644
--- a/salt/modules/pkg_resource.py
+++ b/salt/modules/pkg_resource.py
@@ -312,18 +312,17 @@ def format_pkg_list(packages, versions_as_list, attr):
ret = copy.deepcopy(packages)
if attr:
ret_attr = {}
- requested_attr = {'epoch', 'version', 'release', 'arch', 'install_date', 'install_date_time_t'}
+ requested_attr = set(['epoch', 'version', 'release', 'arch',
+ 'install_date', 'install_date_time_t'])
if attr != 'all':
requested_attr &= set(attr + ['version'] + ['arch'])
for name in ret:
- if 'pkg.parse_arch' in __salt__:
- _parse_arch = __salt__['pkg.parse_arch'](name)
- else:
- _parse_arch = {'name': name, 'arch': None}
- _name = _parse_arch['name']
- _arch = _parse_arch['arch']
+ _parse_arch_from_name = __salt__.get('pkg.parse_arch_from_name', lambda pkgname: {'name': pkgname, 'arch': None})
+ name_arch_d = _parse_arch_from_name(name)
+ _name = name_arch_d['name']
+ _arch = name_arch_d['arch']
versions = []
pkgname = None
diff --git a/salt/modules/yumpkg.py b/salt/modules/yumpkg.py
index c89d321a1b..b1257d0de0 100644
--- a/salt/modules/yumpkg.py
+++ b/salt/modules/yumpkg.py
@@ -442,7 +442,7 @@ def normalize_name(name):
return name
-def parse_arch(name):
+def parse_arch_from_name(name):
'''
Parse name and architecture from the specified package name.
@@ -450,7 +450,7 @@ def parse_arch(name):
.. code-block:: bash
- salt '*' pkg.parse_arch zsh.x86_64
+ salt '*' pkg.parse_arch_from_name zsh.x86_64
'''
_name, _arch = None, None
try:
diff --git a/salt/modules/zypperpkg.py b/salt/modules/zypperpkg.py
index 08a9c2ed4d..04a6a6872d 100644
--- a/salt/modules/zypperpkg.py
+++ b/salt/modules/zypperpkg.py
@@ -593,7 +593,7 @@ def info_available(*names, **kwargs):
return ret
-def parse_arch(name):
+def parse_arch_from_name(name):
'''
Parse name and architecture from the specified package name.
@@ -601,7 +601,7 @@ def parse_arch(name):
.. code-block:: bash
- salt '*' pkg.parse_arch zsh.x86_64
+ salt '*' pkg.parse_arch_from_name zsh.x86_64
'''
_name, _arch = None, None
try:
diff --git a/tests/unit/modules/test_pkg_resource.py b/tests/unit/modules/test_pkg_resource.py
index 6bb647082c..d5ccb2a7a2 100644
--- a/tests/unit/modules/test_pkg_resource.py
+++ b/tests/unit/modules/test_pkg_resource.py
@@ -236,7 +236,7 @@ class PkgresTestCase(TestCase, LoaderModuleMockMixin):
}
]
}
- with patch.dict(pkg_resource.__salt__, {'pkg.parse_arch': NAME_ARCH_MAPPING.get}):
+ with patch.dict(pkg_resource.__salt__, {'pkg.parse_arch_from_name': NAME_ARCH_MAPPING.get}):
if six.PY3:
self.assertCountEqual(pkg_resource.format_pkg_list(packages, False, attr=['epoch', 'release']), expected_pkg_list)
else:
diff --git a/tests/unit/modules/test_yumpkg.py b/tests/unit/modules/test_yumpkg.py
index 5e652b7e53..9fbe3d051e 100644
--- a/tests/unit/modules/test_yumpkg.py
+++ b/tests/unit/modules/test_yumpkg.py
@@ -107,7 +107,7 @@ class YumTestCase(TestCase, LoaderModuleMockMixin):
patch.dict(yumpkg.__salt__, {'pkg_resource.add_pkg': _add_data}), \
patch.dict(yumpkg.__salt__, {'pkg_resource.format_pkg_list': pkg_resource.format_pkg_list}), \
patch.dict(yumpkg.__salt__, {'pkg_resource.stringify': MagicMock()}), \
- patch.dict(pkg_resource.__salt__, {'pkg.parse_arch': yumpkg.parse_arch}):
+ patch.dict(pkg_resource.__salt__, {'pkg.parse_arch_from_name': yumpkg.parse_arch_from_name}):
pkgs = yumpkg.list_pkgs(versions_as_list=True)
for pkg_name, pkg_version in {
'python-urlgrabber': '3.10-8.el7',
@@ -155,7 +155,7 @@ class YumTestCase(TestCase, LoaderModuleMockMixin):
patch.dict(yumpkg.__salt__, {'pkg_resource.add_pkg': _add_data}), \
patch.dict(yumpkg.__salt__, {'pkg_resource.format_pkg_list': pkg_resource.format_pkg_list}), \
patch.dict(yumpkg.__salt__, {'pkg_resource.stringify': MagicMock()}), \
- patch.dict(pkg_resource.__salt__, {'pkg.parse_arch': yumpkg.parse_arch}):
+ patch.dict(pkg_resource.__salt__, {'pkg.parse_arch_from_name': yumpkg.parse_arch_from_name}):
pkgs = yumpkg.list_pkgs(attr=['epoch', 'release', 'arch', 'install_date_time_t'])
for pkg_name, pkg_attr in {
'python-urlgrabber': {
@@ -273,7 +273,7 @@ class YumTestCase(TestCase, LoaderModuleMockMixin):
patch.dict(yumpkg.__salt__, {'pkg_resource.add_pkg': _add_data}), \
patch.dict(yumpkg.__salt__, {'pkg_resource.format_pkg_list': pkg_resource.format_pkg_list}), \
patch.dict(yumpkg.__salt__, {'pkg_resource.stringify': MagicMock()}), \
- patch.dict(pkg_resource.__salt__, {'pkg.parse_arch': yumpkg.parse_arch}):
+ patch.dict(pkg_resource.__salt__, {'pkg.parse_arch_from_name': yumpkg.parse_arch_from_name}):
pkgs = yumpkg.list_pkgs(attr=['epoch', 'release', 'arch', 'install_date_time_t'])
expected_pkg_list = {
'glibc': [
@@ -315,51 +315,6 @@ class YumTestCase(TestCase, LoaderModuleMockMixin):
else:
self.assertItemsEqual(pkginfo, expected_pkg_list[pkgname])
- def test_list_patches(self):
- '''
- Test patches listing.
-
- :return:
- '''
- yum_out = [
- 'i my-fake-patch-not-installed-1234 recommended spacewalk-usix-2.7.5.2-2.2.noarch',
- ' my-fake-patch-not-installed-1234 recommended spacewalksd-5.0.26.2-21.2.x86_64',
- 'i my-fake-patch-not-installed-1234 recommended suseRegisterInfo-3.1.1-18.2.x86_64',
- 'i my-fake-patch-installed-1234 recommended my-package-one-1.1-0.1.x86_64',
- 'i my-fake-patch-installed-1234 recommended my-package-two-1.1-0.1.x86_64',
- ]
-
- expected_patches = {
- 'my-fake-patch-not-installed-1234': {
- 'installed': False,
- 'summary': [
- 'spacewalk-usix-2.7.5.2-2.2.noarch',
- 'spacewalksd-5.0.26.2-21.2.x86_64',
- 'suseRegisterInfo-3.1.1-18.2.x86_64',
- ]
- },
- 'my-fake-patch-installed-1234': {
- 'installed': True,
- 'summary': [
- 'my-package-one-1.1-0.1.x86_64',
- 'my-package-two-1.1-0.1.x86_64',
- ]
- }
- }
-
- with patch.dict(yumpkg.__grains__, {'osarch': 'x86_64'}), \
- patch.dict(yumpkg.__salt__, {'cmd.run_stdout': MagicMock(return_value=os.linesep.join(yum_out))}):
- patches = yumpkg.list_patches()
- self.assertFalse(patches['my-fake-patch-not-installed-1234']['installed'])
- self.assertTrue(len(patches['my-fake-patch-not-installed-1234']['summary']) == 3)
- for _patch in expected_patches['my-fake-patch-not-installed-1234']['summary']:
- self.assertTrue(_patch in patches['my-fake-patch-not-installed-1234']['summary'])
-
- self.assertTrue(patches['my-fake-patch-installed-1234']['installed'])
- self.assertTrue(len(patches['my-fake-patch-installed-1234']['summary']) == 2)
- for _patch in expected_patches['my-fake-patch-installed-1234']['summary']:
- self.assertTrue(_patch in patches['my-fake-patch-installed-1234']['summary'])
-
def test_latest_version_with_options(self):
with patch.object(yumpkg, 'list_pkgs', MagicMock(return_value={})):
diff --git a/tests/unit/modules/test_zypperpkg.py b/tests/unit/modules/test_zypperpkg.py
index 78414ca4ac..b3162f10cd 100644
--- a/tests/unit/modules/test_zypperpkg.py
+++ b/tests/unit/modules/test_zypperpkg.py
@@ -607,7 +607,7 @@ Repository 'DUMMY' not found by its alias, number, or URI.
patch.dict(zypper.__salt__, {'pkg_resource.add_pkg': _add_data}), \
patch.dict(zypper.__salt__, {'pkg_resource.format_pkg_list': pkg_resource.format_pkg_list}), \
patch.dict(zypper.__salt__, {'pkg_resource.stringify': MagicMock()}), \
- patch.dict(pkg_resource.__salt__, {'pkg.parse_arch': zypper.parse_arch}):
+ patch.dict(pkg_resource.__salt__, {'pkg.parse_arch_from_name': zypper.parse_arch_from_name}):
pkgs = zypper.list_pkgs(attr=['epoch', 'release', 'arch', 'install_date_time_t'])
self.assertFalse(pkgs.get('gpg-pubkey', False))
for pkg_name, pkg_attr in {
@@ -698,7 +698,7 @@ Repository 'DUMMY' not found by its alias, number, or URI.
patch.dict(zypper.__salt__, {'pkg_resource.add_pkg': _add_data}), \
patch.dict(zypper.__salt__, {'pkg_resource.format_pkg_list': pkg_resource.format_pkg_list}), \
patch.dict(zypper.__salt__, {'pkg_resource.stringify': MagicMock()}), \
- patch.dict(pkg_resource.__salt__, {'pkg.parse_arch': zypper.parse_arch}):
+ patch.dict(pkg_resource.__salt__, {'pkg.parse_arch_from_name': zypper.parse_arch_from_name}):
pkgs = zypper.list_pkgs(attr=['epoch', 'release', 'arch', 'install_date_time_t'])
expected_pkg_list = {
'glibc': [
--
2.16.4
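
The rename itself is mechanical, but the behaviour is worth pinning down. A
reduced sketch using the yum/zypper separator (the real modules additionally
validate the suffix against a list of known architectures, which this sketch
skips):

    PKG_ARCH_SEPARATOR = '.'

    def parse_arch_from_name(name):
        try:
            _name, _arch = name.rsplit(PKG_ARCH_SEPARATOR, 1)
        except ValueError:
            _name, _arch = name, None
        return {'name': _name, 'arch': _arch}

    assert parse_arch_from_name('zsh.x86_64') == {'name': 'zsh', 'arch': 'x86_64'}
    assert parse_arch_from_name('zsh') == {'name': 'zsh', 'arch': None}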
++++++ remove-deprecated-usage-of-no_mock-and-no_mock_reaso.patch ++++++
From 25b4e3ea983b2606b2fb3d3c0e42f9840208bf84 Mon Sep 17 00:00:00 2001
From: Pablo Suárez Hernández <psuarezhernandez@suse.com>
Date: Wed, 11 Mar 2020 16:14:16 +0000
Subject: [PATCH] Remove deprecated usage of NO_MOCK and NO_MOCK_REASON
---
tests/integration/pillar/test_git_pillar.py | 1 -
tests/unit/cli/test_batch_async.py | 3 +--
tests/unit/cli/test_support.py | 6 +-----
tests/unit/modules/test_cmdmod.py | 1 -
tests/unit/modules/test_kubeadm.py | 5 +----
tests/unit/modules/test_saltsupport.py | 4 +---
tests/unit/modules/test_xfs.py | 3 ---
tests/unit/states/test_btrfs.py | 3 ---
tests/unit/utils/test_pkg.py | 3 +--
9 files changed, 5 insertions(+), 24 deletions(-)
diff --git a/tests/integration/pillar/test_git_pillar.py b/tests/integration/pillar/test_git_pillar.py
index d417a7ebc3..9218f28d15 100644
--- a/tests/integration/pillar/test_git_pillar.py
+++ b/tests/integration/pillar/test_git_pillar.py
@@ -1383,7 +1383,6 @@ class TestPygit2SSH(GitPillarSSHTestBase):
)
-@skipIf(NO_MOCK, NO_MOCK_REASON)
@skipIf(_windows_or_mac(), 'minion is windows or mac')
@skip_if_not_root
@skipIf(not HAS_PYGIT2, 'pygit2 >= {0} and libgit2 >= {1} required'.format(PYGIT2_MINVER, LIBGIT2_MINVER))
diff --git a/tests/unit/cli/test_batch_async.py b/tests/unit/cli/test_batch_async.py
index 635dc689a8..0c66550d5b 100644
--- a/tests/unit/cli/test_batch_async.py
+++ b/tests/unit/cli/test_batch_async.py
@@ -8,10 +8,9 @@ from salt.cli.batch_async import BatchAsync
import salt.ext.tornado
from salt.ext.tornado.testing import AsyncTestCase
from tests.support.unit import skipIf, TestCase
-from tests.support.mock import MagicMock, patch, NO_MOCK, NO_MOCK_REASON
+from tests.support.mock import MagicMock, patch
-@skipIf(NO_MOCK, NO_MOCK_REASON)
class AsyncBatchTestCase(AsyncTestCase, TestCase):
def setUp(self):
diff --git a/tests/unit/cli/test_support.py b/tests/unit/cli/test_support.py
index 85ea957d79..8d8c1cb11f 100644
--- a/tests/unit/cli/test_support.py
+++ b/tests/unit/cli/test_support.py
@@ -6,7 +6,7 @@
from __future__ import absolute_import, print_function, unicode_literals
from tests.support.unit import skipIf, TestCase
-from tests.support.mock import MagicMock, patch, NO_MOCK, NO_MOCK_REASON
+from tests.support.mock import MagicMock, patch
from salt.cli.support.console import IndentOutput
from salt.cli.support.collector import SupportDataCollector, SaltSupport
@@ -26,7 +26,6 @@ except ImportError:
@skipIf(not bool(pytest), 'Pytest needs to be installed')
-@skipIf(NO_MOCK, NO_MOCK_REASON)
class SaltSupportIndentOutputTestCase(TestCase):
'''
Unit Tests for the salt-support indent output.
@@ -90,7 +89,6 @@ class SaltSupportIndentOutputTestCase(TestCase):
@skipIf(not bool(pytest), 'Pytest needs to be installed')
-@skipIf(NO_MOCK, NO_MOCK_REASON)
class SaltSupportCollectorTestCase(TestCase):
'''
Collector tests.
@@ -211,7 +209,6 @@ class SaltSupportCollectorTestCase(TestCase):
@skipIf(not bool(pytest), 'Pytest needs to be installed')
-@skipIf(NO_MOCK, NO_MOCK_REASON)
class SaltSupportRunnerTestCase(TestCase):
'''
Test runner class.
@@ -404,7 +401,6 @@ class SaltSupportRunnerTestCase(TestCase):
@skipIf(not bool(pytest), 'Pytest needs to be installed')
-@skipIf(NO_MOCK, NO_MOCK_REASON)
class ProfileIntegrityTestCase(TestCase):
'''
Default profile integrity
diff --git a/tests/unit/modules/test_cmdmod.py b/tests/unit/modules/test_cmdmod.py
index 8d763435f8..3d13fb9290 100644
--- a/tests/unit/modules/test_cmdmod.py
+++ b/tests/unit/modules/test_cmdmod.py
@@ -37,7 +37,6 @@ MOCK_SHELL_FILE = '# List of acceptable shells\n' \
'/bin/bash\n'
-@skipIf(NO_MOCK, NO_MOCK_REASON)
class CMDMODTestCase(TestCase, LoaderModuleMockMixin):
'''
Unit tests for the salt.modules.cmdmod module
diff --git a/tests/unit/modules/test_kubeadm.py b/tests/unit/modules/test_kubeadm.py
index a58f54f118..f17ba4ad64 100644
--- a/tests/unit/modules/test_kubeadm.py
+++ b/tests/unit/modules/test_kubeadm.py
@@ -29,16 +29,13 @@ from tests.support.mixins import LoaderModuleMockMixin
from tests.support.unit import TestCase, skipIf
from tests.support.mock import (
MagicMock,
- patch,
- NO_MOCK,
- NO_MOCK_REASON
+ patch
)
import salt.modules.kubeadm as kubeadm
from salt.exceptions import CommandExecutionError
-@skipIf(NO_MOCK, NO_MOCK_REASON)
class KubeAdmTestCase(TestCase, LoaderModuleMockMixin):
'''
Test cases for salt.modules.kubeadm
diff --git a/tests/unit/modules/test_saltsupport.py b/tests/unit/modules/test_saltsupport.py
index 7bd652a90e..75616ba949 100644
--- a/tests/unit/modules/test_saltsupport.py
+++ b/tests/unit/modules/test_saltsupport.py
@@ -9,7 +9,7 @@ from __future__ import absolute_import, print_function, unicode_literals
# Import Salt Testing Libs
from tests.support.mixins import LoaderModuleMockMixin
from tests.support.unit import TestCase, skipIf
-from tests.support.mock import patch, MagicMock, NO_MOCK, NO_MOCK_REASON
+from tests.support.mock import patch, MagicMock
from salt.modules import saltsupport
import salt.exceptions
import datetime
@@ -21,7 +21,6 @@ except ImportError:
@skipIf(not bool(pytest), 'Pytest required')
-@skipIf(NO_MOCK, NO_MOCK_REASON)
class SaltSupportModuleTestCase(TestCase, LoaderModuleMockMixin):
'''
Test cases for salt.modules.support::SaltSupportModule
@@ -289,7 +288,6 @@ professor: Farnsworth
@skipIf(not bool(pytest), 'Pytest required')
-@skipIf(NO_MOCK, NO_MOCK_REASON)
class LogCollectorTestCase(TestCase, LoaderModuleMockMixin):
'''
Test cases for salt.modules.support::LogCollector
diff --git a/tests/unit/modules/test_xfs.py b/tests/unit/modules/test_xfs.py
index 4b423d69d1..d680c4e317 100644
--- a/tests/unit/modules/test_xfs.py
+++ b/tests/unit/modules/test_xfs.py
@@ -8,8 +8,6 @@ import textwrap
from tests.support.mixins import LoaderModuleMockMixin
from tests.support.unit import skipIf, TestCase
from tests.support.mock import (
- NO_MOCK,
- NO_MOCK_REASON,
MagicMock,
patch)
@@ -17,7 +15,6 @@ from tests.support.mock import (
import salt.modules.xfs as xfs
-@skipIf(NO_MOCK, NO_MOCK_REASON)
@patch('salt.modules.xfs._get_mounts', MagicMock(return_value={}))
class XFSTestCase(TestCase, LoaderModuleMockMixin):
'''
diff --git a/tests/unit/states/test_btrfs.py b/tests/unit/states/test_btrfs.py
index 3f45ed94f9..c68f6279dc 100644
--- a/tests/unit/states/test_btrfs.py
+++ b/tests/unit/states/test_btrfs.py
@@ -32,8 +32,6 @@ from tests.support.mixins import LoaderModuleMockMixin
from tests.support.unit import skipIf, TestCase
from tests.support.mock import (
MagicMock,
- NO_MOCK,
- NO_MOCK_REASON,
patch,
)
@@ -43,7 +41,6 @@ import salt.states.btrfs as btrfs
import pytest
-@skipIf(NO_MOCK, NO_MOCK_REASON)
class BtrfsTestCase(TestCase, LoaderModuleMockMixin):
'''
Test cases for salt.states.btrfs
diff --git a/tests/unit/utils/test_pkg.py b/tests/unit/utils/test_pkg.py
index 361e0bf92f..38c0cb8f84 100644
--- a/tests/unit/utils/test_pkg.py
+++ b/tests/unit/utils/test_pkg.py
@@ -3,7 +3,7 @@
from __future__ import absolute_import, unicode_literals, print_function
from tests.support.unit import TestCase, skipIf
-from tests.support.mock import Mock, MagicMock, patch, NO_MOCK, NO_MOCK_REASON
+from tests.support.mock import Mock, MagicMock, patch
import salt.utils.pkg
from salt.utils.pkg import rpm
@@ -13,7 +13,6 @@ except ImportError:
pytest = None
-@skipIf(NO_MOCK, NO_MOCK_REASON)
@skipIf(pytest is None, 'PyTest is missing')
class PkgRPMTestCase(TestCase):
'''
--
2.23.0
++++++ remove-unnecessary-yield-causing-badyielderror-bsc-1.patch ++++++
From bec0a06a069404c5043b1c59e3fe7cce2df177d3 Mon Sep 17 00:00:00 2001
From: Mihai Dinca <mdinca@suse.de>
Date: Wed, 30 Oct 2019 10:19:12 +0100
Subject: [PATCH] Remove unnecessary yield causing BadYieldError
(bsc#1154620)
---
salt/cli/batch_async.py | 2 --
1 file changed, 2 deletions(-)
diff --git a/salt/cli/batch_async.py b/salt/cli/batch_async.py
index 6d0dca1da5..754c257b36 100644
--- a/salt/cli/batch_async.py
+++ b/salt/cli/batch_async.py
@@ -227,7 +227,6 @@ class BatchAsync(object):
self.event.unsubscribe(pattern, match_type='glob')
del self
gc.collect()
- yield
@tornado.gen.coroutine
def schedule_next(self):
@@ -263,7 +262,6 @@ class BatchAsync(object):
else:
yield self.end_batch()
gc.collect()
- yield
def __del__(self):
self.local = None
--
2.16.4
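
Background for the two deleted lines: a generator-based Tornado coroutine may
only yield futures and other convertible awaitables. A trailing bare ``yield``
yields None, which the coroutine runner rejects with BadYieldError. A
reduction, assuming the vendored salt.ext.tornado behaves like upstream
Tornado:

    from tornado import gen

    @gen.coroutine
    def bad():
        yield  # yields None; BadYieldError surfaces on the returned future

    @gen.coroutine
    def good():
        return  # simply fall off the end; the future resolves normally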
++++++ remove-vendored-backports-abc-from-requirements.patch ++++++
From 4f80e969e31247a4755d98d25f29b5d8b1b916c3 Mon Sep 17 00:00:00 2001
From: Pablo Suárez Hernández <psuarezhernandez@suse.com>
Date: Mon, 27 Apr 2020 16:37:38 +0100
Subject: [PATCH] Remove vendored 'backports-abc' from requirements
---
requirements/base.txt | 1 -
1 file changed, 1 deletion(-)
diff --git a/requirements/base.txt b/requirements/base.txt
index 922aec4c754178fd5c317ed636a0ebe487fcb25d..8adf76a2a045f4fca8695c584fedcfc913f54db2 100644
--- a/requirements/base.txt
+++ b/requirements/base.txt
@@ -4,7 +4,6 @@ PyYAML
MarkupSafe
requests>=1.0.0
# Requirements for Tornado 4.5.3 (vendored as salt.ext.tornado)
-backports-abc==0.5; python_version < '3.0'
singledispatch==3.4.0.3; python_version < '3.4'
# Required by Tornado to handle threads stuff.
futures>=2.0; python_version < '3.0'
--
2.23.0
++++++ removes-unresolved-merge-conflict-in-yumpkg-module.patch ++++++
From 93c0630b84b9da89acaf549a5c79e5d834c70a65 Mon Sep 17 00:00:00 2001
From: Jochen Breuer <jbreuer@suse.de>
Date: Thu, 5 Mar 2020 21:01:31 +0100
Subject: [PATCH] Removes unresolved merge conflict in yumpkg module
---
salt/modules/yumpkg.py | 4 ----
1 file changed, 4 deletions(-)
diff --git a/salt/modules/yumpkg.py b/salt/modules/yumpkg.py
index 88d74020b3..04ab240cd4 100644
--- a/salt/modules/yumpkg.py
+++ b/salt/modules/yumpkg.py
@@ -3220,11 +3220,7 @@ def _get_patches(installed_only=False):
for line in salt.utils.itertools.split(ret, os.linesep):
inst, advisory_id, sev, pkg = re.match(r'([i|\s]) ([^\s]+) +([^\s]+) +([^\s]+)',
line).groups()
-<<<<<<< HEAD
if advisory_id not in patches:
-=======
- if not advisory_id in patches:
->>>>>>> Do not report patches as installed when not all the related packages are installed (bsc#1128061)
patches[advisory_id] = {
'installed': True if inst == 'i' else False,
'summary': [pkg]
--
2.16.4
++++++ restore-default-behaviour-of-pkg-list-return.patch ++++++
From 8f9478ffba672767e77b9b263f279e0379ab1ed1 Mon Sep 17 00:00:00 2001
From: Jochen Breuer <jbreuer@suse.de>
Date: Fri, 30 Aug 2019 14:20:06 +0200
Subject: [PATCH] Restore default behaviour of pkg list return
The default behaviour for the pkg list return was not to include patches,
even when installing patches; only the packages were returned. There is
now a parameter to also return patches if that is needed.
Co-authored-by: Mihai Dinca <mdinca@suse.de>
---
salt/modules/zypperpkg.py | 32 +++++++++++++++++++++++---------
1 file changed, 23 insertions(+), 9 deletions(-)
diff --git a/salt/modules/zypperpkg.py b/salt/modules/zypperpkg.py
index 8179cd8c1d..f7158e0810 100644
--- a/salt/modules/zypperpkg.py
+++ b/salt/modules/zypperpkg.py
@@ -1304,8 +1304,10 @@ def refresh_db(root=None):
return ret
-def _find_types(pkgs):
+def _detect_includes(pkgs, inclusion_detection):
'''Form a package names list, find prefixes of packages types.'''
+ if not inclusion_detection:
+ return None
return sorted({pkg.split(':', 1)[0] for pkg in pkgs
if len(pkg.split(':', 1)) == 2})
@@ -1321,6 +1323,7 @@ def install(name=None,
ignore_repo_failure=False,
no_recommends=False,
root=None,
+ inclusion_detection=False,
**kwargs):
'''
.. versionchanged:: 2015.8.12,2016.3.3,2016.11.0
@@ -1435,6 +1438,9 @@ def install(name=None,
.. versionadded:: 2018.3.0
+ inclusion_detection:
+ Detect ``includes`` based on ``sources``
+ By default packages are always included
Returns a dict containing the new package names and versions::
@@ -1500,7 +1506,8 @@ def install(name=None,
diff_attr = kwargs.get("diff_attr")
- includes = _find_types(targets)
+ includes = _detect_includes(targets, inclusion_detection)
+
old = list_pkgs(attr=diff_attr, root=root, includes=includes) if not downloadonly else list_downloaded(root)
downgrades = []
@@ -1692,7 +1699,7 @@ def upgrade(refresh=True,
return ret
-def _uninstall(name=None, pkgs=None, root=None):
+def _uninstall(inclusion_detection, name=None, pkgs=None, root=None):
'''
Remove and purge do identical things but with different Zypper commands,
this function performs the common logic.
@@ -1702,7 +1709,7 @@ def _uninstall(name=None, pkgs=None, root=None):
except MinionError as exc:
raise CommandExecutionError(exc)
- includes = _find_types(pkg_params.keys())
+ includes = _detect_includes(pkg_params.keys(), inclusion_detection)
old = list_pkgs(root=root, includes=includes)
targets = []
for target in pkg_params:
@@ -1761,7 +1768,7 @@ def normalize_name(name):
return name
-def remove(name=None, pkgs=None, root=None, **kwargs): # pylint: disable=unused-argument
+def remove(name=None, pkgs=None, root=None, inclusion_detection=False, **kwargs): # pylint: disable=unused-argument
'''
.. versionchanged:: 2015.8.12,2016.3.3,2016.11.0
On minions running systemd>=205, `systemd-run(1)`_ is now used to
@@ -1792,8 +1799,11 @@ def remove(name=None, pkgs=None, root=None, **kwargs): # pylint: disable=unused
root
Operate on a different root directory.
- .. versionadded:: 0.16.0
+ inclusion_detection:
+ Detect ``includes`` based on ``pkgs``
+ By default packages are always included
+ .. versionadded:: 0.16.0
Returns a dict containing the changes.
@@ -1805,10 +1815,10 @@ def remove(name=None, pkgs=None, root=None, **kwargs): # pylint: disable=unused
salt '*' pkg.remove <package1>,<package2>,<package3>
salt '*' pkg.remove pkgs='["foo", "bar"]'
'''
- return _uninstall(name=name, pkgs=pkgs, root=root)
+ return _uninstall(inclusion_detection, name=name, pkgs=pkgs, root=root)
-def purge(name=None, pkgs=None, root=None, **kwargs): # pylint: disable=unused-argument
+def purge(name=None, pkgs=None, root=None, inclusion_detection=False, **kwargs): # pylint: disable=unused-argument
'''
.. versionchanged:: 2015.8.12,2016.3.3,2016.11.0
On minions running systemd>=205, `systemd-run(1)`_ is now used to
@@ -1840,6 +1850,10 @@ def purge(name=None, pkgs=None, root=None, **kwargs): # pylint: disable=unused-
root
Operate on a different root directory.
+ inclusion_detection:
+ Detect ``includes`` based on ``pkgs``
+ By default packages are always included
+
.. versionadded:: 0.16.0
@@ -1853,7 +1867,7 @@ def purge(name=None, pkgs=None, root=None, **kwargs): # pylint: disable=unused-
salt '*' pkg.purge <package1>,<package2>,<package3>
salt '*' pkg.purge pkgs='["foo", "bar"]'
'''
- return _uninstall(name=name, pkgs=pkgs, root=root)
+ return _uninstall(inclusion_detection, name=name, pkgs=pkgs, root=root)
def list_locks(root=None):
--
2.16.4
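
The mechanics of the new flag, reduced to the helper from the diff: type
prefixes such as ``patch:`` are only collected when the caller opts in, so
the default pkg list return contains packages only:

    def _detect_includes(pkgs, inclusion_detection):
        if not inclusion_detection:
            return None
        return sorted({pkg.split(':', 1)[0] for pkg in pkgs
                       if len(pkg.split(':', 1)) == 2})

    assert _detect_includes(['vim', 'patch:openSUSE-2019-foo'], False) is None
    assert _detect_includes(['vim', 'patch:openSUSE-2019-foo'], True) == ['patch']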
++++++ return-the-expected-powerpc-os-arch-bsc-1117995.patch ++++++
From 27e90d416b89ac2c7839e1d03ded37f86df7290f Mon Sep 17 00:00:00 2001
From: Mihai Dinca <mdinca@suse.de>
Date: Thu, 13 Dec 2018 12:17:35 +0100
Subject: [PATCH] Return the expected powerpc os arch (bsc#1117995)
---
salt/utils/pkg/rpm.py | 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/salt/utils/pkg/rpm.py b/salt/utils/pkg/rpm.py
index bc5eb30eda..cb85eb99fe 100644
--- a/salt/utils/pkg/rpm.py
+++ b/salt/utils/pkg/rpm.py
@@ -52,9 +52,12 @@ def get_osarch():
stdout=subprocess.PIPE,
stderr=subprocess.PIPE).communicate()[0]
else:
- ret = ''.join([x for x in platform.uname()[-2:] if x][-1:])
-
- return salt.utils.stringutils.to_str(ret).strip() or 'unknown'
+ ret = ''.join(list(filter(None, platform.uname()[-2:]))[-1:])
+ ret = salt.utils.stringutils.to_str(ret).strip() or 'unknown'
+ ARCH_FIXES_MAPPING = {
+ "powerpc64le": "ppc64le"
+ }
+ return ARCH_FIXES_MAPPING.get(ret, ret)
def check_32(arch, osarch=None):
--
2.16.4
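
The shape of the fix in isolation (the mapping content is taken verbatim from
the patch; treating platform.uname() output as the raw input is the assumption
being illustrated):

    ARCH_FIXES_MAPPING = {
        "powerpc64le": "ppc64le",
    }

    def normalize_osarch(raw):
        raw = raw.strip() or 'unknown'
        return ARCH_FIXES_MAPPING.get(raw, raw)

    assert normalize_osarch('powerpc64le') == 'ppc64le'  # what rpm expects
    assert normalize_osarch('x86_64') == 'x86_64'        # others pass through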
++++++ revert-changes-to-slspath-saltstack-salt-56341.patch ++++++
From 43c0b0f181e5e25ae90a25d2f3bcf9465385015a Mon Sep 17 00:00:00 2001
From: Alexander Graul <agraul@suse.com>
Date: Mon, 27 Apr 2020 18:55:34 +0200
Subject: [PATCH] Revert changes to slspath (saltstack/salt#56341)
This was a breaking change in v3000 that was fixed in 3000.1
Fixes bsc#1170104
---
doc/ref/states/vars.rst | 9 +++---
salt/utils/templates.py | 2 ++
tests/unit/utils/test_templates.py | 46 ++++++++++++++++++++++++++++++
3 files changed, 53 insertions(+), 4 deletions(-)
diff --git a/doc/ref/states/vars.rst b/doc/ref/states/vars.rst
index 2b146f7dc15f7830237cad342e7cc44cbd73a0ff..1d8fc343984931148217bfce46ff1a854e70c80f 100644
--- a/doc/ref/states/vars.rst
+++ b/doc/ref/states/vars.rst
@@ -96,10 +96,11 @@ include option.
slspath
=======
-The `slspath` variable contains the path to the current sls file. The value
-of `slspath` in files referenced in the current sls depends on the reference
-method. For jinja includes `slspath` is the path to the current file. For
-salt includes `slspath` is the path to the included file.
+The `slspath` variable contains the path to the directory of the current sls
+file. The value of `slspath` in files referenced in the current sls depends on
+the reference method. For jinja includes `slspath` is the path to the current
+directory of the file. For salt includes `slspath` is the path to the directory
+of the included file.
.. code-block:: jinja
diff --git a/salt/utils/templates.py b/salt/utils/templates.py
index d026118269cdd78da9b101d1fa598e3b9a1cf6aa..98092a9d796032985f173d6463123281c60c2713 100644
--- a/salt/utils/templates.py
+++ b/salt/utils/templates.py
@@ -122,6 +122,8 @@ def wrap_tmpl_func(render_str):
slspath = context['sls'].replace('.', '/')
if tmplpath is not None:
context['tplpath'] = tmplpath
+ if not tmplpath.lower().replace('\\', '/').endswith('/init.sls'):
+ slspath = os.path.dirname(slspath)
template = tmplpath.replace('\\', '/')
i = template.rfind(slspath.replace('.', '/'))
if i != -1:
diff --git a/tests/unit/utils/test_templates.py b/tests/unit/utils/test_templates.py
index 3d5855dd21e82eb92351a9a847c3e4d132450cb2..b9d9eba24d5ea1d630bc1392beeccdd1233a1f7a 100644
--- a/tests/unit/utils/test_templates.py
+++ b/tests/unit/utils/test_templates.py
@@ -5,13 +5,16 @@ Unit tests for salt.utils.templates.py
# Import python libs
from __future__ import absolute_import, print_function, unicode_literals
+import os
import sys
import logging
# Import Salt libs
import salt.utils.templates
+import salt.utils.files
# Import Salt Testing Libs
+from tests.support.helpers import with_tempdir
from tests.support.unit import TestCase, skipIf
log = logging.getLogger(__name__)
@@ -181,3 +184,46 @@ class RenderTestCase(TestCase):
ctx['var'] = 'OK'
res = salt.utils.templates.render_cheetah_tmpl(tmpl, ctx)
self.assertEqual(res.strip(), 'OK')
+
+
+class MockRender(object):
+ def __call__(self, tplstr, context, tmplpath=None):
+ self.tplstr = tplstr
+ self.context = context
+ self.tmplpath = tmplpath
+ return tplstr
+
+
+class WrapRenderTestCase(TestCase):
+
+ @with_tempdir()
+ def test_wrap_issue_56119_a(self, tempdir):
+ slsfile = os.path.join(tempdir, 'foo')
+ with salt.utils.files.fopen(slsfile, 'w') as fp:
+ fp.write('{{ slspath }}')
+ context = {'opts': {}, 'saltenv': 'base', 'sls': 'foo.bar'}
+ render = MockRender()
+ wrapped = salt.utils.templates.wrap_tmpl_func(render)
+ res = wrapped(
+ slsfile,
+ context=context,
+ tmplpath='/tmp/foo/bar/init.sls'
+ )
+ assert render.context['slspath'] == 'foo/bar', render.context['slspath']
+ assert render.context['tpldir'] == 'foo/bar', render.context['tpldir']
+
+ @with_tempdir()
+ def test_wrap_issue_56119_b(self, tempdir):
+ slsfile = os.path.join(tempdir, 'foo')
+ with salt.utils.files.fopen(slsfile, 'w') as fp:
+ fp.write('{{ slspath }}')
+ context = {'opts': {}, 'saltenv': 'base', 'sls': 'foo.bar.bang'}
+ render = MockRender()
+ wrapped = salt.utils.templates.wrap_tmpl_func(render)
+ res = wrapped(
+ slsfile,
+ context=context,
+ tmplpath='/tmp/foo/bar/bang.sls'
+ )
+ assert render.context['slspath'] == 'foo/bar', render.context['slspath']
+ assert render.context['tpldir'] == 'foo/bar', render.context['tpldir']
--
2.23.0
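
The restored rule, reduced to its path arithmetic (this mirrors the two new
unit tests): slspath names the directory of the rendered sls, so both
foo/bar/init.sls and foo/bar.sls yield 'foo/bar':

    import os

    def slspath_for(sls, tmplpath):
        slspath = sls.replace('.', '/')
        if not tmplpath.lower().replace('\\', '/').endswith('/init.sls'):
            slspath = os.path.dirname(slspath)
        return slspath

    assert slspath_for('foo.bar', '/tmp/foo/bar/init.sls') == 'foo/bar'
    assert slspath_for('foo.bar.bang', '/tmp/foo/bar/bang.sls') == 'foo/bar'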
++++++ run-salt-api-as-user-salt-bsc-1064520.patch ++++++
From 4e9b3808b5a27fcdc857b26d73e0f6716243ca92 Mon Sep 17 00:00:00 2001
From: Christian Lanig <clanig@suse.com>
Date: Mon, 27 Nov 2017 13:10:26 +0100
Subject: [PATCH] Run salt-api as user salt (bsc#1064520)
---
pkg/salt-api.service | 1 +
1 file changed, 1 insertion(+)
diff --git a/pkg/salt-api.service b/pkg/salt-api.service
index 7ca582dfb4..bf513e4dbd 100644
--- a/pkg/salt-api.service
+++ b/pkg/salt-api.service
@@ -6,6 +6,7 @@ After=network.target
[Service]
Type=notify
NotifyAccess=all
+User=salt
LimitNOFILE=8192
ExecStart=/usr/bin/salt-api
TimeoutStopSec=3
--
2.16.4
++++++ run-salt-master-as-dedicated-salt-user.patch ++++++
From 497acb852b0d4519984d981dfefdc0848c3e4159 Mon Sep 17 00:00:00 2001
From: Klaus Kämpf <kkaempf@suse.de>
Date: Wed, 20 Jan 2016 11:01:06 +0100
Subject: [PATCH] Run salt master as dedicated salt user
* The minion always runs as root
---
conf/master | 3 ++-
pkg/salt-common.logrotate | 2 ++
2 files changed, 4 insertions(+), 1 deletion(-)
diff --git a/conf/master b/conf/master
index ce2e26872a..22a4a7bdb4 100644
--- a/conf/master
+++ b/conf/master
@@ -25,7 +25,8 @@
# permissions to allow the specified user to run the master. The exception is
# the job cache, which must be deleted if this user is changed. If the
# modified files cause conflicts, set verify_env to False.
-#user: root
+user: salt
+syndic_user: salt
# Tell the master to also use salt-ssh when running commands against minions.
#enable_ssh_minions: False
diff --git a/pkg/salt-common.logrotate b/pkg/salt-common.logrotate
index 3cd002308e..0d99d1b801 100644
--- a/pkg/salt-common.logrotate
+++ b/pkg/salt-common.logrotate
@@ -1,4 +1,5 @@
/var/log/salt/master {
+ su salt salt
weekly
missingok
rotate 7
@@ -15,6 +16,7 @@
}
/var/log/salt/key {
+ su salt salt
weekly
missingok
rotate 7
--
2.16.4
++++++ salt-tmpfiles.d ++++++
# Type Path Mode UID GID Age Argument
d /var/run/salt 0750 root salt
d /var/run/salt/master 0750 salt salt
d /var/run/salt/minion 0750 root root
++++++ sanitize-grains-loaded-from-roster_grains.json.patch ++++++
From 83a2a79ed3834a1cfd90941d0075d1c38341dc1d Mon Sep 17 00:00:00 2001
From: Pablo Suárez Hernández <psuarezhernandez@suse.com>
Date: Wed, 1 Apr 2020 12:27:30 +0100
Subject: [PATCH] Sanitize grains loaded from roster_grains.json
Ensure _format_cached_grains is called on state.pkg test
---
salt/modules/state.py | 3 ++-
tests/unit/modules/test_state.py | 4 +++-
2 files changed, 5 insertions(+), 2 deletions(-)
diff --git a/salt/modules/state.py b/salt/modules/state.py
index ec1e1edb42e9d8d5bc1e991434eb187e3b65ab89..a4f3f8c37086a79a60f85b5ca4b71d2af1e1f90f 100644
--- a/salt/modules/state.py
+++ b/salt/modules/state.py
@@ -43,6 +43,7 @@ import salt.defaults.exitcodes
from salt.exceptions import CommandExecutionError, SaltInvocationError
from salt.runners.state import orchestrate as _orchestrate
from salt.utils.odict import OrderedDict
+from salt.loader import _format_cached_grains
# Import 3rd-party libs
from salt.ext import six
@@ -2188,7 +2189,7 @@ def pkg(pkg_path,
roster_grains_json = os.path.join(root, 'roster_grains.json')
if os.path.isfile(roster_grains_json):
with salt.utils.files.fopen(roster_grains_json, 'r') as fp_:
- roster_grains = salt.utils.json.load(fp_)
+ roster_grains = _format_cached_grains(salt.utils.json.load(fp_))
if os.path.isfile(roster_grains_json):
popts['grains'] = roster_grains
diff --git a/tests/unit/modules/test_state.py b/tests/unit/modules/test_state.py
index e3c3dc8fc62efa848603082c3d8f3a8f09d5c426..cda846595eeec9788d17b55fcad5cab7a49a62c2 100644
--- a/tests/unit/modules/test_state.py
+++ b/tests/unit/modules/test_state.py
@@ -1129,8 +1129,10 @@ class StateTestCase(TestCase, LoaderModuleMockMixin):
MockTarFile.path = ""
with patch('salt.utils.files.fopen', mock_open()), \
- patch.object(salt.utils.json, 'loads', mock_json_loads_true):
+ patch.object(salt.utils.json, 'loads', mock_json_loads_true), \
+ patch.object(state, '_format_cached_grains', MagicMock()):
self.assertEqual(state.pkg(tar_file, 0, "md5"), True)
+ state._format_cached_grains.assert_called_once()
MockTarFile.path = ""
if six.PY2:
--
2.23.0
++++++ strip-trailing-from-repo.uri-when-comparing-repos-in.patch ++++++
From f2b465f41575a8a28d4762f9647ea30df6a64637 Mon Sep 17 00:00:00 2001
From: Matei Albu <malbu@suse.de>
Date: Fri, 15 Feb 2019 14:34:13 +0100
Subject: [PATCH] Strip trailing "/" from repo.uri when comparing repos
in aptpkg.mod_repo (bsc#1146192)
(cherry picked from commit af85627)
---
salt/modules/aptpkg.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/salt/modules/aptpkg.py b/salt/modules/aptpkg.py
index a5b039fc79..bafad40efe 100644
--- a/salt/modules/aptpkg.py
+++ b/salt/modules/aptpkg.py
@@ -2365,7 +2365,7 @@ def mod_repo(repo, saltenv='base', **kwargs):
# and the resulting source line. The idea here is to ensure
# we are not returning bogus data because the source line
# has already been modified on a previous run.
- repo_matches = source.type == repo_type and source.uri == repo_uri and source.dist == repo_dist
+ repo_matches = source.type == repo_type and source.uri.rstrip('/') == repo_uri.rstrip('/') and source.dist == repo_dist
kw_matches = source.dist == kw_dist and source.type == kw_type
if repo_matches or kw_matches:
--
2.16.4
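
In isolation, the normalization this one-liner adds, so that an existing
source is recognized regardless of a trailing slash:

    def same_uri(a, b):
        return a.rstrip('/') == b.rstrip('/')

    assert same_uri('http://repo.example.com/ubuntu/', 'http://repo.example.com/ubuntu')
    assert not same_uri('http://repo.example.com/ubuntu', 'http://repo.example.com/debian')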
++++++ support-config-non-root-permission-issues-fixes-u-50.patch ++++++
From be2f4d3da3612ca02f215f987e4055d2bd177a7b Mon Sep 17 00:00:00 2001
From: Bo Maryniuk <bo@suse.de>
Date: Wed, 17 Oct 2018 14:10:47 +0200
Subject: [PATCH] Support-config non-root permission issues fixes
(U#50095)
Do not crash if there is no configuration available at all
Handle CLI and log errors
Catch the error when overwriting an existing archive owned by another user
Suppress excessive tracebacks on error log level
---
salt/cli/support/collector.py | 39 ++++++++++++++++++++++++++++++++++++---
salt/utils/parsers.py | 2 +-
2 files changed, 37 insertions(+), 4 deletions(-)
diff --git a/salt/cli/support/collector.py b/salt/cli/support/collector.py
index 478d07e13b..a4343297b6 100644
--- a/salt/cli/support/collector.py
+++ b/salt/cli/support/collector.py
@@ -125,6 +125,31 @@ class SupportDataCollector(object):
self.__current_section = []
self.__current_section_name = name
+ def _printout(self, data, output):
+ '''
+ Use salt outputter to printout content.
+
+ :return:
+ '''
+ opts = {'extension_modules': '', 'color': False}
+ try:
+ printout = salt.output.get_printout(output, opts)(data)
+ if printout is not None:
+ return printout.rstrip()
+ except (KeyError, AttributeError, TypeError) as err:
+ log.debug(err, exc_info=True)
+ try:
+ printout = salt.output.get_printout('nested', opts)(data)
+ if printout is not None:
+ return printout.rstrip()
+ except (KeyError, AttributeError, TypeError) as err:
+ log.debug(err, exc_info=True)
+ printout = salt.output.get_printout('raw', opts)(data)
+ if printout is not None:
+ return printout.rstrip()
+
+ return salt.output.try_printout(data, output, opts)
+
def write(self, title, data, output=None):
'''
Add a data to the current opened section.
@@ -138,7 +163,7 @@ class SupportDataCollector(object):
try:
if isinstance(data, dict) and 'return' in data:
data = data['return']
- content = salt.output.try_printout(data, output, {'extension_modules': '', 'color': False})
+ content = self._printout(data, output)
except Exception: # Fall-back to just raw YAML
content = None
else:
@@ -406,7 +431,11 @@ class SaltSupport(salt.utils.parsers.SaltSupportOptionParser):
and self.config.get('support_archive')
and os.path.exists(self.config['support_archive'])):
self.out.warning('Terminated earlier, cleaning up')
- os.unlink(self.config['support_archive'])
+ try:
+ os.unlink(self.config['support_archive'])
+ except Exception as err:
+ log.debug(err)
+ self.out.error('{} while cleaning up.'.format(err))
def _check_existing_archive(self):
'''
@@ -418,7 +447,11 @@ class SaltSupport(salt.utils.parsers.SaltSupportOptionParser):
if os.path.exists(self.config['support_archive']):
if self.config['support_archive_force_overwrite']:
self.out.warning('Overwriting existing archive: {}'.format(self.config['support_archive']))
- os.unlink(self.config['support_archive'])
+ try:
+ os.unlink(self.config['support_archive'])
+ except Exception as err:
+ log.debug(err)
+ self.out.error('{} while trying to overwrite existing archive.'.format(err))
ret = True
else:
self.out.warning('File {} already exists.'.format(self.config['support_archive']))
diff --git a/salt/utils/parsers.py b/salt/utils/parsers.py
index 83dfe717f6..5f98c73291 100644
--- a/salt/utils/parsers.py
+++ b/salt/utils/parsers.py
@@ -1972,7 +1972,7 @@ class SaltSupportOptionParser(six.with_metaclass(OptionParserMeta, OptionParser,
'''
_opts, _args = optparse.OptionParser.parse_args(self)
configs = self.find_existing_configs(_opts.support_unit)
- if cfg not in configs:
+ if configs and cfg not in configs:
cfg = configs[0]
return config.master_config(self.get_config_file_path(cfg))
--
2.16.4
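
The new _printout boils down to a graceful-degradation chain: try the
requested outputter, then 'nested', then 'raw', and only then the generic
path that previously crashed outright. Its shape with stand-in renderers
(both the function name and the renderers dict are illustrative):

    import json

    def render(data, preferred, renderers):
        for name in (preferred, 'nested', 'raw'):
            try:
                out = renderers[name](data)
                if out is not None:
                    return out.rstrip()
            except (KeyError, AttributeError, TypeError):
                continue
        raise ValueError('no renderer could format the data')

    renderers = {'json': json.dumps, 'nested': repr, 'raw': str}
    assert render({'a': 1}, 'json', renderers) == '{"a": 1}'
    assert render({'a': 1}, 'missing', renderers) == "{'a': 1}"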
++++++ support-for-btrfs-and-xfs-in-parted-and-mkfs.patch ++++++
From 570b45e5a1f1786fe0f449a038f8f8a19b6b9ce2 Mon Sep 17 00:00:00 2001
From: Jochen Breuer <jbreuer@suse.de>
Date: Fri, 10 Jan 2020 17:18:14 +0100
Subject: [PATCH] Support for Btrfs and XFS in parted and mkfs
---
salt/modules/parted_partition.py | 4 ++--
tests/unit/modules/test_parted_partition.py | 16 ++++++++++++++++
2 files changed, 18 insertions(+), 2 deletions(-)
diff --git a/salt/modules/parted_partition.py b/salt/modules/parted_partition.py
index c991530aba..9441fec49f 100644
--- a/salt/modules/parted_partition.py
+++ b/salt/modules/parted_partition.py
@@ -390,8 +390,8 @@ def _is_fstype(fs_type):
:param fs_type: file system type
:return: True if fs_type is supported in this module, False otherwise
'''
- return fs_type in set(['ext2', 'ext3', 'ext4', 'fat32', 'fat16', 'linux-swap', 'reiserfs',
- 'hfs', 'hfs+', 'hfsx', 'NTFS', 'ntfs', 'ufs'])
+ return fs_type in set(['btrfs', 'ext2', 'ext3', 'ext4', 'fat32', 'fat16', 'linux-swap', 'reiserfs',
+ 'hfs', 'hfs+', 'hfsx', 'NTFS', 'ntfs', 'ufs', 'xfs'])
def mkfs(device, fs_type):
diff --git a/tests/unit/modules/test_parted_partition.py b/tests/unit/modules/test_parted_partition.py
index aad2829867..571e30292b 100644
--- a/tests/unit/modules/test_parted_partition.py
+++ b/tests/unit/modules/test_parted_partition.py
@@ -376,6 +376,22 @@ class PartedTestCase(TestCase, LoaderModuleMockMixin):
}
self.assertEqual(output, expected)
+ def test_btrfs_fstypes(self):
+ '''Tests if we see btrfs as valid fs type'''
+ with patch('salt.modules.parted_partition._validate_device', MagicMock()):
+ try:
+ parted.mkfs('/dev/foo', 'btrfs')
+ except CommandExecutionError:
+ self.fail("Btrfs is not in the supported fstypes")
+
+ def test_xfs_fstypes(self):
+ '''Tests if we see xfs as valid fs type'''
+ with patch('salt.modules.parted_partition._validate_device', MagicMock()):
+ try:
+ parted.mkfs('/dev/foo', 'xfs')
+ except CommandExecutionError:
+ self.fail("XFS is not in the supported fstypes")
+
def test_disk_set(self):
with patch('salt.modules.parted_partition._validate_device', MagicMock()):
self.cmdrun.return_value = ''
--
2.16.4
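The change itself is just an extended membership whitelist; restated as a self-contained function it can be exercised directly (the name below is illustrative, the real module keeps _is_fstype):

def is_supported_fstype(fs_type):
    # Membership test against the extended whitelist, now covering btrfs and xfs.
    return fs_type in {'btrfs', 'ext2', 'ext3', 'ext4', 'fat32', 'fat16',
                       'linux-swap', 'reiserfs', 'hfs', 'hfs+', 'hfsx',
                       'NTFS', 'ntfs', 'ufs', 'xfs'}

print(is_supported_fstype('btrfs'))  # True
print(is_supported_fstype('zfs'))    # False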
++++++ switch-firewalld-state-to-use-change_interface.patch ++++++
From c48d54fe6243614aba481c887208e473f58a5057 Mon Sep 17 00:00:00 2001
From: Pablo Suárez Hernández <psuarezhernandez(a)suse.com>
Date: Mon, 20 May 2019 11:59:39 +0100
Subject: [PATCH] Switch firewalld state to use change_interface
The firewalld.present state allows binding an interface to a given zone.
However, if the interface is already bound to some other zone, calling
`add_interface` will not rebind the interface but will report an error.
The `change_interface` option, however, can rebind the interface from
one zone to another.
This PR adds a `firewalld.change_interface` call to the firewalld module
and updates the `firewalld.present` state to use this call.
---
salt/modules/firewalld.py | 23 +++++++++++++++++++++++
salt/states/firewalld.py | 4 ++--
2 files changed, 25 insertions(+), 2 deletions(-)
diff --git a/salt/modules/firewalld.py b/salt/modules/firewalld.py
index a6d90d38b8..c8b646024b 100644
--- a/salt/modules/firewalld.py
+++ b/salt/modules/firewalld.py
@@ -932,6 +932,29 @@ def remove_interface(zone, interface, permanent=True):
return __firewall_cmd(cmd)
+def change_interface(zone, interface, permanent=True):
+ '''
+ Change the zone an interface is bound to
+
+ .. versionadded:: 2019.?.?
+
+ CLI Example:
+
+ .. code-block:: bash
+
+ salt '*' firewalld.change_interface zone eth0
+ '''
+ if interface in get_interfaces(zone, permanent):
+ log.info('Interface is already bound to zone.')
+
+ cmd = '--zone={0} --change-interface={1}'.format(zone, interface)
+
+ if permanent:
+ cmd += ' --permanent'
+
+ return __firewall_cmd(cmd)
+
+
def get_sources(zone, permanent=True):
'''
List sources bound to a zone
diff --git a/salt/states/firewalld.py b/salt/states/firewalld.py
index 25cbad170a..e4338beaf2 100644
--- a/salt/states/firewalld.py
+++ b/salt/states/firewalld.py
@@ -633,8 +633,8 @@ def _present(name,
for interface in new_interfaces:
if not __opts__['test']:
try:
- __salt__['firewalld.add_interface'](name, interface,
- permanent=True)
+ __salt__['firewalld.change_interface'](name, interface,
+ permanent=True)
except CommandExecutionError as err:
ret['comment'] = 'Error: {0}'.format(err)
return ret
--
2.16.4
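Stripped of the Salt wiring (__firewall_cmd, get_interfaces), the new module function reduces to building a firewall-cmd argument string; a minimal sketch under that assumption:

def change_interface_args(zone, interface, permanent=True):
    # Same argument construction as the added firewalld.change_interface.
    args = '--zone={0} --change-interface={1}'.format(zone, interface)
    if permanent:
        args += ' --permanent'
    return args

print(change_interface_args('public', 'eth0'))
# --zone=public --change-interface=eth0 --permanent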
++++++ temporary-fix-extend-the-whitelist-of-allowed-comman.patch ++++++
From 89c188107bc60d4e84879c3f3c2fde7489a14153 Mon Sep 17 00:00:00 2001
From: Bo Maryniuk <bo(a)suse.de>
Date: Thu, 24 Jan 2019 18:12:35 +0100
Subject: [PATCH] temporary fix: extend the whitelist of allowed commands
---
salt/auth/__init__.py | 2 ++
1 file changed, 2 insertions(+)
diff --git a/salt/auth/__init__.py b/salt/auth/__init__.py
index 329e4a62c9..ecbd1c808c 100644
--- a/salt/auth/__init__.py
+++ b/salt/auth/__init__.py
@@ -47,6 +47,8 @@ AUTH_INTERNAL_KEYWORDS = frozenset([
'gather_job_timeout',
'kwarg',
'match',
+ "id_",
+ "force",
'metadata',
'print_event',
'raw',
--
2.16.4
++++++ travis.yml ++++++
language: python
python:
- '2.6'
- '2.7'
before_install:
- sudo apt-get update
- sudo apt-get install --fix-broken --ignore-missing -y -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" swig rabbitmq-server ruby python-apt mysql-server libmysqlclient-dev
- (git describe && git fetch --tags) || (git remote add upstream git://github.com/saltstack/salt.git && git fetch --tags upstream)
- pip install mock
- pip install --allow-external http://dl.dropbox.com/u/174789/m2crypto-0.20.1.tar.gz
- pip install --upgrade pep8 'pylint<=1.2.0'
- pip install --upgrade coveralls
- "if [[ $TRAVIS_PYTHON_VERSION == '2.6' ]]; then pip install unittest2 ordereddict; fi"
- pip install git+https://github.com/saltstack/salt-testing.git#egg=SaltTesting
install:
- pip install -r requirements/zeromq.txt -r requirements/cloud.txt
- pip install --allow-all-external -r requirements/opt.txt
before_script:
- "/home/travis/virtualenv/python${TRAVIS_PYTHON_VERSION}/bin/pylint --rcfile=.testing.pylintrc salt/ && echo 'Finished Pylint Check Cleanly' || echo 'Finished Pylint Check With Errors'"
- "/home/travis/virtualenv/python${TRAVIS_PYTHON_VERSION}/bin/pep8 --ignore=E501,E12 salt/ && echo 'Finished PEP-8 Check Cleanly' || echo 'Finished PEP-8 Check With Errors'"
script: "sudo -E /home/travis/virtualenv/python${TRAVIS_PYTHON_VERSION}/bin/python setup.py test --runtests-opts='--run-destructive --sysinfo -v --coverage'"
after_success:
- coveralls
notifications:
irc:
channels: "irc.freenode.org#salt-devel"
on_success: change
on_failure: change
++++++ update-documentation.sh ++++++
#!/bin/bash
#
# Update html.tar.bz2 documentation tarball
# Author: Bo Maryniuk <bo(a)suse.de>
#
NO_SPHINX_PARAM="--without-sphinx"
function build_virtenv() {
virtualenv --system-site-packages $1
source $1/bin/activate
pip install --upgrade pip
if [ -z "$2" ]; then
pip install -I Sphinx
fi
}
function check_env() {
if [[ -z "$1" || "$1" != "$NO_SPHINX_PARAM" ]] && [ ! -z "$(which sphinx-build 2>/dev/null)" ]; then
cat <<EOF
You've installed Sphinx globally. But it might be outdated or
clash with the version I am going to install into the temporary
virtual environment from PIP.
Please consider removing Sphinx from your system.
Or pass me the "$NO_SPHINX_PARAM" param so I will try reusing yours
and see what happens. :)
EOF
exit 1;
fi
for cmd in "make" "quilt" "virtualenv" "pip"; do
if [ -z "$(which $cmd 2>/dev/null)" ]; then
echo "Error: '$cmd' is still missing. Install it, please."
exit 1;
fi
done
}
function quilt_setup() {
quilt setup salt.spec
cd $1
quilt push -a
}
function build_docs() {
cd $1
make html
rm _build/html/.buildinfo
cd _build/html
chmod -R -x+X *
cd ..
tar cvf - html | bzip2 > $2/html.tar.bz2
}
function write_changelog() {
mv salt.changes salt.changes.previous
TIME=$(date -u +'%a %b %d %T %Z %Y')
MAIL=$1
SEP="-------------------------------------------------------------------"
cat <<EOF > salt.changes
$SEP
$TIME - $MAIL
- Updated html.tar.bz2 documentation tarball.
EOF
cat salt.changes.previous >> salt.changes
rm salt.changes.previous
}
if [ -z "$1" ]; then
echo "Usage: $0 <your e-mail> [--without-sphinx]"
exit 1;
fi
check_env $2;
START=$(pwd)
V_ENV="sphinx_doc_gen"
V_TMP=$(mktemp -d)
for f in "salt.spec" "salt*tar.gz"; do
cp -v $f $V_TMP
done
cd $V_TMP;
build_virtenv $V_ENV $2;
SRC_DIR="salt-$(cat salt.spec | grep ^Version: | cut -d: -f2 | sed -e 's/[[:blank:]]//g')";
quilt_setup $SRC_DIR
build_docs doc $V_TMP
cd $START
mv $V_TMP/html.tar.bz2 $START
rm -rf $V_TMP
echo "Done"
echo "---------------"
++++++ use-adler32-algorithm-to-compute-string-checksums.patch ++++++
From a8e3defcb484296e18343c6447649fe508ab2644 Mon Sep 17 00:00:00 2001
From: Bo Maryniuk <bo(a)suse.de>
Date: Sat, 28 Jul 2018 22:59:04 +0200
Subject: [PATCH] Use Adler32 algorithm to compute string checksums
Generate the same numeric value across all Python versions and platforms
Re-add getting hash by Python shell-out method
Add an option to choose between default hashing, Adler32 or CRC32 algorithms
Set default config option for server_id hashing to False on minion
Choose the CRC method, defaulting to the faster but less reliable "adler32", when CRC is in use
Add warning for Sodium.
---
salt/config/__init__.py | 7 ++++++-
salt/grains/core.py | 53 ++++++++++++++++++++++++++++++++-----------------
2 files changed, 41 insertions(+), 19 deletions(-)
diff --git a/salt/config/__init__.py b/salt/config/__init__.py
index 70b34ec949..0ebe1181dd 100644
--- a/salt/config/__init__.py
+++ b/salt/config/__init__.py
@@ -1190,6 +1190,10 @@ VALID_OPTS = immutabletypes.freeze({
# Allow raw_shell option when using the ssh
# client via the Salt API
'netapi_allow_raw_shell': bool,
+
+ # Use Adler32 hashing algorithm for server_id (default False until Sodium, "adler32" after)
+ # Possible values are: False, adler32, crc32
+ 'server_id_use_crc': (bool, six.string_types),
})
# default configurations
@@ -1480,7 +1484,8 @@ DEFAULT_MINION_OPTS = immutabletypes.freeze({
'minion_sign_messages': False,
'discovery': False,
'schedule': {},
- 'ssh_merge_pillar': True
+ 'ssh_merge_pillar': True,
+ 'server_id_use_crc': False,
})
DEFAULT_MASTER_OPTS = immutabletypes.freeze({
diff --git a/salt/grains/core.py b/salt/grains/core.py
index 2851809472..9c1b5d930e 100644
--- a/salt/grains/core.py
+++ b/salt/grains/core.py
@@ -20,6 +20,7 @@ import platform
import logging
import locale
import uuid
+import zlib
from errno import EACCES, EPERM
import datetime
import warnings
@@ -62,6 +63,7 @@ import salt.utils.path
import salt.utils.pkg.rpm
import salt.utils.platform
import salt.utils.stringutils
+import salt.utils.versions
from salt.ext import six
from salt.ext.six.moves import range
@@ -2792,40 +2794,55 @@ def _hw_data(osdata):
return grains
-def get_server_id():
+def _get_hash_by_shell():
'''
- Provides an integer based on the FQDN of a machine.
- Useful as server-id in MySQL replication or anywhere else you'll need an ID
- like this.
+ Shell out to Python 3 to compute a reliable hash
+ :return:
'''
- # Provides:
- # server_id
-
- if salt.utils.platform.is_proxy():
- return {}
id_ = __opts__.get('id', '')
id_hash = None
py_ver = sys.version_info[:2]
if py_ver >= (3, 3):
# Python 3.3 enabled hash randomization, so we need to shell out to get
# a reliable hash.
- id_hash = __salt__['cmd.run'](
- [sys.executable, '-c', 'print(hash("{0}"))'.format(id_)],
- env={'PYTHONHASHSEED': '0'}
- )
+ id_hash = __salt__['cmd.run']([sys.executable, '-c', 'print(hash("{0}"))'.format(id_)],
+ env={'PYTHONHASHSEED': '0'})
try:
id_hash = int(id_hash)
except (TypeError, ValueError):
- log.debug(
- 'Failed to hash the ID to get the server_id grain. Result of '
- 'hash command: %s', id_hash
- )
+ log.debug('Failed to hash the ID to get the server_id grain. Result of hash command: %s', id_hash)
id_hash = None
if id_hash is None:
# Python < 3.3 or error encountered above
id_hash = hash(id_)
- return {'server_id': abs(id_hash % (2 ** 31))}
+ return abs(id_hash % (2 ** 31))
+
+
+def get_server_id():
+ '''
+ Provides an integer based on the FQDN of a machine.
+ Useful as server-id in MySQL replication or anywhere else you'll need an ID
+ like this.
+ '''
+ # Provides:
+ # server_id
+
+ if salt.utils.platform.is_proxy():
+ server_id = {}
+ else:
+ use_crc = __opts__.get('server_id_use_crc')
+ if bool(use_crc):
+ id_hash = getattr(zlib, use_crc, zlib.adler32)(__opts__.get('id', '').encode()) & 0xffffffff
+ else:
+ salt.utils.versions.warn_until('Sodium', 'The server_id is computed neither by Adler32 nor by CRC32. '
+ 'Please use the "server_id_use_crc" option and define the algorithm you '
+ 'prefer (default "adler32"). The server_id will be computed with '
+ 'Adler32 by default.')
+ id_hash = _get_hash_by_shell()
+ server_id = {'server_id': id_hash}
+
+ return server_id
def get_master():
--
2.16.4
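Unlike the built-in hash(), which Python randomizes per process since 3.3, the zlib checksums are stable across runs, versions, and platforms, which is the point of this patch. A small self-contained sketch (the helper name is hypothetical):

import zlib

def stable_server_id(minion_id, algorithm='adler32'):
    # Unknown algorithm names fall back to adler32, mirroring the
    # getattr(zlib, use_crc, zlib.adler32) lookup in the patched grain.
    crc_fn = getattr(zlib, algorithm, zlib.adler32)
    return crc_fn(minion_id.encode()) & 0xffffffff

print(stable_server_id('minion1'))           # identical on every run
print(stable_server_id('minion1', 'crc32'))  # CRC32 variant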
++++++ use-current-ioloop-for-the-localclient-instance-of-b.patch ++++++
From 1ab46d5f9ed435021aa8eeb40ada984f42c8e93d Mon Sep 17 00:00:00 2001
From: Pablo Suárez Hernández <psuarezhernandez(a)suse.com>
Date: Thu, 3 Oct 2019 15:19:02 +0100
Subject: [PATCH] Use current IOLoop for the LocalClient instance of
BatchAsync (bsc#1137642)
---
salt/cli/batch_async.py | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/salt/cli/batch_async.py b/salt/cli/batch_async.py
index 2bb50459c8..f9e736f804 100644
--- a/salt/cli/batch_async.py
+++ b/salt/cli/batch_async.py
@@ -52,7 +52,7 @@ class BatchAsync(object):
'''
def __init__(self, parent_opts, jid_gen, clear_load):
ioloop = tornado.ioloop.IOLoop.current()
- self.local = salt.client.get_local_client(parent_opts['conf_file'])
+ self.local = salt.client.get_local_client(parent_opts['conf_file'], io_loop=ioloop)
if 'gather_job_timeout' in clear_load['kwargs']:
clear_load['gather_job_timeout'] = clear_load['kwargs'].pop('gather_job_timeout')
else:
@@ -266,6 +266,7 @@ class BatchAsync(object):
yield
def __del__(self):
+ self.local = None
self.event = None
self.ioloop = None
gc.collect()
--
2.16.4
++++++ use-full-option-name-instead-of-undocumented-abbrevi.patch ++++++
From c4742f553fe60aee82577622def1eeca0e2abf93 Mon Sep 17 00:00:00 2001
From: Michael Calmer <mc(a)suse.de>
Date: Sun, 1 Mar 2020 16:22:54 +0100
Subject: [PATCH] use full option name instead of undocumented
abbreviation
---
salt/modules/zypperpkg.py | 2 +-
tests/unit/modules/test_zypperpkg.py | 14 +++++++++++++-
2 files changed, 14 insertions(+), 2 deletions(-)
diff --git a/salt/modules/zypperpkg.py b/salt/modules/zypperpkg.py
index 0c15214e5e..e3f802a911 100644
--- a/salt/modules/zypperpkg.py
+++ b/salt/modules/zypperpkg.py
@@ -2498,7 +2498,7 @@ def list_products(all=False, refresh=False, root=None):
OEM_PATH = os.path.join(root, os.path.relpath(OEM_PATH, os.path.sep))
cmd = list()
if not all:
- cmd.append('--disable-repos')
+ cmd.append('--disable-repositories')
cmd.append('products')
if not all:
cmd.append('-i')
diff --git a/tests/unit/modules/test_zypperpkg.py b/tests/unit/modules/test_zypperpkg.py
index 76937cc358..2a8e753b9d 100644
--- a/tests/unit/modules/test_zypperpkg.py
+++ b/tests/unit/modules/test_zypperpkg.py
@@ -238,7 +238,18 @@ class ZypperTestCase(TestCase, LoaderModuleMockMixin):
'stdout': get_test_data(filename)
}
- with patch.dict(zypper.__salt__, {'cmd.run_all': MagicMock(return_value=ref_out)}):
+ cmd_run_all = MagicMock(return_value=ref_out)
+ mock_call = call(['zypper',
+ '--non-interactive',
+ '--xmlout',
+ '--no-refresh',
+ '--disable-repositories',
+ 'products', u'-i'],
+ env={'ZYPP_READONLY_HACK': '1'},
+ output_loglevel='trace',
+ python_shell=False)
+
+ with patch.dict(zypper.__salt__, {'cmd.run_all': cmd_run_all}):
products = zypper.list_products()
self.assertEqual(len(products), 7)
self.assertIn(test_data['vendor'], [product['vendor'] for product in products])
@@ -247,6 +258,7 @@ class ZypperTestCase(TestCase, LoaderModuleMockMixin):
self.assertCountEqual(test_data[kwd], [prod.get(kwd) for prod in products])
else:
self.assertEqual(test_data[kwd], sorted([prod.get(kwd) for prod in products]))
+ cmd_run_all.assert_has_calls([mock_call])
def test_refresh_db(self):
'''
--
2.16.4
++++++ use-threadpool-from-multiprocessing.pool-to-avoid-le.patch ++++++
From 1f50b796dd551c25a8fc87fe825d1508f340858e Mon Sep 17 00:00:00 2001
From: Pablo Suárez Hernández <psuarezhernandez(a)suse.com>
Date: Tue, 30 Apr 2019 10:51:42 +0100
Subject: [PATCH] Use ThreadPool from multiprocessing.pool to avoid
leakings
---
salt/grains/core.py | 14 +++++++++-----
1 file changed, 9 insertions(+), 5 deletions(-)
diff --git a/salt/grains/core.py b/salt/grains/core.py
index 4600f055dd..f1e3ebe9d2 100644
--- a/salt/grains/core.py
+++ b/salt/grains/core.py
@@ -27,7 +27,7 @@ import datetime
import warnings
import time
-from multiprocessing.dummy import Pool as ThreadPool
+from multiprocessing.pool import ThreadPool
# pylint: disable=import-error
try:
@@ -2300,10 +2300,14 @@ def fqdns():
# Create a ThreadPool to process the underlying calls to 'socket.gethostbyaddr' in parallel.
# This avoids blocking the execution when the "fqdn" is not defined for certain IP addresses, which was causing
# "socket.timeout" to be reached multiple times sequentially, blocking execution for several seconds.
- pool = ThreadPool(8)
- results = pool.map(_lookup_fqdn, addresses)
- pool.close()
- pool.join()
+
+ try:
+ pool = ThreadPool(8)
+ results = pool.map(_lookup_fqdn, addresses)
+ pool.close()
+ pool.join()
+ except Exception as exc:
+ log.error("Exception while creating a ThreadPool for resolving FQDNs: %s", exc)
for item in results:
if item:
--
2.16.4
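The parallel resolution pattern can be reproduced with nothing but the standard library; _lookup_fqdn is approximated here, and a try/finally keeps the pool cleanup from the patch:

import socket
from multiprocessing.pool import ThreadPool

def lookup_fqdn(address):
    # Return the resolved name, or None when resolution fails or times out.
    try:
        return socket.gethostbyaddr(address)[0]
    except (socket.herror, socket.gaierror, socket.timeout):
        return None

addresses = ['127.0.0.1']
pool = ThreadPool(8)
try:
    results = pool.map(lookup_fqdn, addresses)
finally:
    pool.close()
    pool.join()
print([name for name in results if name])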
++++++ virt-adding-kernel-boot-parameters-to-libvirt-xml-55.patch ++++++
From f8ccfae9908d6a1001d68a1b8e5e8cee495b5aef Mon Sep 17 00:00:00 2001
From: Larry Dewey <ldewey(a)suse.com>
Date: Tue, 7 Jan 2020 02:48:11 -0700
Subject: [PATCH] virt: adding kernel boot parameters to libvirt xml
#55245 (#197)
* virt: adding kernel boot parameters to libvirt xml
SUSE's autoyast and Red Hat's kickstart take advantage of kernel paths,
initrd paths, and kernel boot command line parameters. These changes
provide the option of using these, and will allow salt and
autoyast/kickstart to work together.
Signed-off-by: Larry Dewey <ldewey(a)suse.com>
* virt: Download linux and initrd
Signed-off-by: Larry Dewey <ldewey(a)suse.com>
---
salt/states/virt.py | 17 +++++++++++++++++
1 file changed, 17 insertions(+)
diff --git a/salt/states/virt.py b/salt/states/virt.py
index 500509fcc0..55a9ad2616 100644
--- a/salt/states/virt.py
+++ b/salt/states/virt.py
@@ -367,6 +367,23 @@ def running(name,
.. versionadded:: 3000
+ :param boot:
+ Specifies kernel for the virtual machine, as well as boot parameters
+ for the virtual machine. This is an optional parameter, and all of the
+ keys are optional within the dictionary. If a remote path is provided
+ to kernel or initrd, salt will handle the downloading of the specified
+ remote file, and will modify the XML accordingly.
+
+ .. code-block:: python
+
+ {
+ 'kernel': '/root/f8-i386-vmlinuz',
+ 'initrd': '/root/f8-i386-initrd',
+ 'cmdline': 'console=ttyS0 ks=http://example.com/f8-i386/os/'
+ }
+
+ .. versionadded:: 3000
+
.. rubric:: Example States
Make sure an already-defined virtual machine called ``domain_name`` is running:
--
2.16.4
++++++ virt._get_domain-don-t-raise-an-exception-if-there-i.patch ++++++
From ef376e2d9a8360367a9a214d8f50d56889f3a664 Mon Sep 17 00:00:00 2001
From: Cédric Bosdonnat <cbosdonnat(a)suse.com>
Date: Tue, 17 Mar 2020 11:01:48 +0100
Subject: [PATCH] virt._get_domain: don't raise an exception if there
is no VM
Raising an exception in _get_domain when there is no VM makes sense when
looking for specific VMs, but not when listing all of them.
---
salt/modules/virt.py | 2 +-
tests/unit/modules/test_virt.py | 41 +++++++++++++++++++++++++++++++++
2 files changed, 42 insertions(+), 1 deletion(-)
diff --git a/salt/modules/virt.py b/salt/modules/virt.py
index f0820e882524e1ebaae335e0a72940d6ff85c1b2..c8e046a47ae76b50651871fe1d149590d5d1e930 100644
--- a/salt/modules/virt.py
+++ b/salt/modules/virt.py
@@ -268,7 +268,7 @@ def _get_domain(conn, *vms, **kwargs):
for id_ in conn.listDefinedDomains():
all_vms.append(id_)
- if not all_vms:
+ if vms and not all_vms:
raise CommandExecutionError('No virtual machines found.')
if vms:
diff --git a/tests/unit/modules/test_virt.py b/tests/unit/modules/test_virt.py
index 8690154662f41c8d9699fba62fcda6d83208a7d7..3e9bd5ef49dfddc019f9b4da1b505d81018e7eed 100644
--- a/tests/unit/modules/test_virt.py
+++ b/tests/unit/modules/test_virt.py
@@ -3639,3 +3639,44 @@ class VirtTestCase(TestCase, LoaderModuleMockMixin):
}
},
[backend for backend in backends if backend['name'] == 'netfs'][0]['options'])
+
+ def test_get_domain(self):
+ '''
+ Test the virt._get_domain function
+ '''
+ # Tests with no VM
+ self.mock_conn.listDomainsID.return_value = []
+ self.mock_conn.listDefinedDomains.return_value = []
+ self.assertEqual([], virt._get_domain(self.mock_conn))
+ self.assertRaisesRegex(CommandExecutionError, 'No virtual machines found.',
+ virt._get_domain, self.mock_conn, 'vm2')
+
+ # Test with active and inactive VMs
+ self.mock_conn.listDomainsID.return_value = [1]
+
+ def create_mock_vm(idx):
+ mock_vm = MagicMock()
+ mock_vm.name.return_value = 'vm{0}'.format(idx)
+ return mock_vm
+
+ mock_vms = [create_mock_vm(idx) for idx in range(3)]
+ self.mock_conn.lookupByID.return_value = mock_vms[0]
+ self.mock_conn.listDefinedDomains.return_value = ['vm1', 'vm2']
+
+ self.mock_conn.lookupByName.side_effect = mock_vms
+ self.assertEqual(mock_vms, virt._get_domain(self.mock_conn))
+
+ self.mock_conn.lookupByName.side_effect = None
+ self.mock_conn.lookupByName.return_value = mock_vms[0]
+ self.assertEqual(mock_vms[0], virt._get_domain(self.mock_conn, inactive=False))
+
+ self.mock_conn.lookupByName.return_value = None
+ self.mock_conn.lookupByName.side_effect = [mock_vms[1], mock_vms[2]]
+ self.assertEqual([mock_vms[1], mock_vms[2]], virt._get_domain(self.mock_conn, active=False))
+
+ self.mock_conn.reset_mock()
+ self.mock_conn.lookupByName.return_value = None
+ self.mock_conn.lookupByName.side_effect = [mock_vms[1], mock_vms[2]]
+ self.assertEqual([mock_vms[1], mock_vms[2]], virt._get_domain(self.mock_conn, 'vm1', 'vm2'))
+ self.assertRaisesRegex(CommandExecutionError, 'The VM "vm2" is not present',
+ virt._get_domain, self.mock_conn, 'vm2', inactive=False)
--
2.23.0
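The behavioural change is a single guard on the raise: an empty hypervisor is only an error when specific VMs were requested. A reduced sketch of that logic, with RuntimeError standing in for Salt's CommandExecutionError and conn being any libvirt-like connection object:

def get_domain_names(conn, *vms):
    # Collect running and defined domains alike.
    all_vms = [conn.lookupByID(id_).name() for id_ in conn.listDomainsID()]
    all_vms.extend(conn.listDefinedDomains())
    # Only complain when specific VMs were requested and none exist at all.
    if vms and not all_vms:
        raise RuntimeError('No virtual machines found.')
    return [vm for vm in all_vms if not vms or vm in vms]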
++++++ x509-fixes-111.patch ++++++
From ebd4bada22dca8f384078e977202c0052a80f1fc Mon Sep 17 00:00:00 2001
From: Florian Bergmann <bergmannf(a)users.noreply.github.com>
Date: Fri, 14 Sep 2018 10:30:39 +0200
Subject: [PATCH] X509 fixes (#111)
* Return proper content type for the x509 certificate
* Remove parenthesis
* Remove extra-variables during the import
* Comment fix
* Remove double returns
* Change log level from trace to debug
* Remove 'pass' and add logging instead
* Remove unnecessary wrapping
* PEP8: line too long
* PEP8: Redefine RSAError variable in except clause
* Do not return None if name was not found
* Do not return None if no matched minions found
* Fix unit tests
---
salt/modules/publish.py | 8 +--
salt/modules/x509.py | 129 +++++++++++++++++++-----------------------------
salt/states/x509.py | 19 ++++---
3 files changed, 66 insertions(+), 90 deletions(-)
diff --git a/salt/modules/publish.py b/salt/modules/publish.py
index 1550aa39a8..f12f1cc947 100644
--- a/salt/modules/publish.py
+++ b/salt/modules/publish.py
@@ -82,10 +82,8 @@ def _publish(
in minion configuration but `via_master` was specified.')
else:
# Find the master in the list of master_uris generated by the minion base class
- matching_master_uris = [master for master
- in __opts__['master_uri_list']
- if '//{0}:'.format(via_master)
- in master]
+ matching_master_uris = [master for master in __opts__['master_uri_list']
+ if '//{0}:'.format(via_master) in master]
if not matching_master_uris:
raise SaltInvocationError('Could not find match for {0} in \
@@ -175,6 +173,8 @@ def _publish(
else:
return ret
+ return {}
+
def publish(tgt,
fun,
diff --git a/salt/modules/x509.py b/salt/modules/x509.py
index 1cdd912bfb..72ab3bb03e 100644
--- a/salt/modules/x509.py
+++ b/salt/modules/x509.py
@@ -39,14 +39,13 @@ from salt.state import STATE_INTERNAL_KEYWORDS as _STATE_INTERNAL_KEYWORDS
# Import 3rd Party Libs
try:
import M2Crypto
- HAS_M2 = True
except ImportError:
- HAS_M2 = False
+ M2Crypto = None
+
try:
import OpenSSL
- HAS_OPENSSL = True
except ImportError:
- HAS_OPENSSL = False
+ OpenSSL = None
__virtualname__ = 'x509'
@@ -84,10 +83,7 @@ def __virtual__():
'''
only load this module if m2crypto is available
'''
- if HAS_M2:
- return __virtualname__
- else:
- return (False, 'Could not load x509 module, m2crypto unavailable')
+ return __virtualname__ if M2Crypto is not None else (False, 'Could not load x509 module, m2crypto unavailable')
class _Ctx(ctypes.Structure):
@@ -130,10 +126,8 @@ def _new_extension(name, value, critical=0, issuer=None, _pyfree=1):
doesn't support getting the publickeyidentifier from the issuer
to create the authoritykeyidentifier extension.
'''
- if name == 'subjectKeyIdentifier' and \
- value.strip('0123456789abcdefABCDEF:') is not '':
- raise salt.exceptions.SaltInvocationError(
- 'value must be precomputed hash')
+ if name == 'subjectKeyIdentifier' and value.strip('0123456789abcdefABCDEF:') is not '':
+ raise salt.exceptions.SaltInvocationError('value must be precomputed hash')
# ensure name and value are bytes
name = salt.utils.stringutils.to_str(name)
@@ -148,7 +142,7 @@ def _new_extension(name, value, critical=0, issuer=None, _pyfree=1):
x509_ext_ptr = M2Crypto.m2.x509v3_ext_conf(None, ctx, name, value)
lhash = None
except AttributeError:
- lhash = M2Crypto.m2.x509v3_lhash() # pylint: disable=no-member
+ lhash = M2Crypto.m2.x509v3_lhash() # pylint: disable=no-member
ctx = M2Crypto.m2.x509v3_set_conf_lhash(
lhash) # pylint: disable=no-member
# ctx not zeroed
@@ -199,10 +193,8 @@ def _get_csr_extensions(csr):
csrtempfile.flush()
csryaml = _parse_openssl_req(csrtempfile.name)
csrtempfile.close()
- if csryaml and 'Requested Extensions' in \
- csryaml['Certificate Request']['Data']:
- csrexts = \
- csryaml['Certificate Request']['Data']['Requested Extensions']
+ if csryaml and 'Requested Extensions' in csryaml['Certificate Request']['Data']:
+ csrexts = csryaml['Certificate Request']['Data']['Requested Extensions']
if not csrexts:
return ret
@@ -297,7 +289,7 @@ def _get_signing_policy(name):
signing_policy = policies.get(name)
if signing_policy:
return signing_policy
- return __salt__['config.get']('x509_signing_policies', {}).get(name)
+ return __salt__['config.get']('x509_signing_policies', {}).get(name) or {}
def _pretty_hex(hex_str):
@@ -336,9 +328,11 @@ def _text_or_file(input_):
'''
if _isfile(input_):
with salt.utils.files.fopen(input_) as fp_:
- return salt.utils.stringutils.to_str(fp_.read())
+ out = salt.utils.stringutils.to_str(fp_.read())
else:
- return salt.utils.stringutils.to_str(input_)
+ out = salt.utils.stringutils.to_str(input_)
+
+ return out
def _parse_subject(subject):
@@ -356,7 +350,7 @@ def _parse_subject(subject):
ret[nid_name] = val
nids.append(nid_num)
except TypeError as err:
- log.trace("Missing attribute '%s'. Error: %s", nid_name, err)
+ log.debug("Missing attribute '%s'. Error: %s", nid_name, err)
return ret
@@ -533,8 +527,8 @@ def get_pem_entries(glob_path):
if os.path.isfile(path):
try:
ret[path] = get_pem_entry(text=path)
- except ValueError:
- pass
+ except ValueError as err:
+ log.debug('Unable to get PEM entries from %s: %s', path, err)
return ret
@@ -612,8 +606,8 @@ def read_certificates(glob_path):
if os.path.isfile(path):
try:
ret[path] = read_certificate(certificate=path)
- except ValueError:
- pass
+ except ValueError as err:
+ log.debug('Unable to read certificate %s: %s', path, err)
return ret
@@ -642,12 +636,10 @@ def read_csr(csr):
# Get size returns in bytes. The world thinks of key sizes in bits.
'Subject': _parse_subject(csr.get_subject()),
'Subject Hash': _dec2hex(csr.get_subject().as_hash()),
- 'Public Key Hash': hashlib.sha1(csr.get_pubkey().get_modulus())\
- .hexdigest()
+ 'Public Key Hash': hashlib.sha1(csr.get_pubkey().get_modulus()).hexdigest(),
+ 'X509v3 Extensions': _get_csr_extensions(csr),
}
- ret['X509v3 Extensions'] = _get_csr_extensions(csr)
-
return ret
@@ -944,7 +936,7 @@ def create_crl( # pylint: disable=too-many-arguments,too-many-locals
# pyOpenSSL Note due to current limitations in pyOpenSSL it is impossible
# to specify a digest For signing the CRL. This will hopefully be fixed
# soon: https://github.com/pyca/pyopenssl/pull/161
- if not HAS_OPENSSL:
+ if OpenSSL is None:
raise salt.exceptions.SaltInvocationError(
'Could not load OpenSSL module, OpenSSL unavailable'
)
@@ -970,8 +962,7 @@ def create_crl( # pylint: disable=too-many-arguments,too-many-locals
continue
if 'revocation_date' not in rev_item:
- rev_item['revocation_date'] = datetime.datetime\
- .now().strftime('%Y-%m-%d %H:%M:%S')
+ rev_item['revocation_date'] = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
rev_date = datetime.datetime.strptime(
rev_item['revocation_date'], '%Y-%m-%d %H:%M:%S')
@@ -1013,8 +1004,9 @@ def create_crl( # pylint: disable=too-many-arguments,too-many-locals
try:
crltext = crl.export(**export_kwargs)
except (TypeError, ValueError):
- log.warning(
- 'Error signing crl with specified digest. Are you using pyopenssl 0.15 or newer? The default md5 digest will be used.')
+ log.warning('Error signing crl with specified digest. '
+ 'Are you using pyopenssl 0.15 or newer? '
+ 'The default md5 digest will be used.')
export_kwargs.pop('digest', None)
crltext = crl.export(**export_kwargs)
@@ -1052,8 +1044,7 @@ def sign_remote_certificate(argdic, **kwargs):
if 'signing_policy' in argdic:
signing_policy = _get_signing_policy(argdic['signing_policy'])
if not signing_policy:
- return 'Signing policy {0} does not exist.'.format(
- argdic['signing_policy'])
+ return 'Signing policy {0} does not exist.'.format(argdic['signing_policy'])
if isinstance(signing_policy, list):
dict_ = {}
@@ -1093,6 +1084,7 @@ def get_signing_policy(signing_policy_name):
signing_policy = _get_signing_policy(signing_policy_name)
if not signing_policy:
return 'Signing policy {0} does not exist.'.format(signing_policy_name)
+
if isinstance(signing_policy, list):
dict_ = {}
for item in signing_policy:
@@ -1105,10 +1097,9 @@ def get_signing_policy(signing_policy_name):
pass
try:
- signing_policy['signing_cert'] = get_pem_entry(
- signing_policy['signing_cert'], 'CERTIFICATE')
+ signing_policy['signing_cert'] = get_pem_entry(signing_policy['signing_cert'], 'CERTIFICATE')
except KeyError:
- pass
+ log.debug('Unable to get "certificate" PEM entry')
return signing_policy
@@ -1358,8 +1349,7 @@ def create_certificate(
salt '*' x509.create_certificate path=/etc/pki/myca.crt signing_private_key='/etc/pki/myca.key' csr='/etc/pki/myca.csr'}
'''
- if not path and not text and \
- ('testrun' not in kwargs or kwargs['testrun'] is False):
+ if not path and not text and ('testrun' not in kwargs or kwargs['testrun'] is False):
raise salt.exceptions.SaltInvocationError(
'Either path or text must be specified.')
if path and text:
@@ -1504,8 +1494,7 @@ def create_certificate(
continue
# Use explicitly set values first, fall back to CSR values.
- extval = kwargs.get(extname) or kwargs.get(extlongname) or \
- csrexts.get(extname) or csrexts.get(extlongname)
+ extval = kwargs.get(extname) or kwargs.get(extlongname) or csrexts.get(extname) or csrexts.get(extlongname)
critical = False
if extval.startswith('critical '):
@@ -1627,8 +1616,8 @@ def create_csr(path=None, text=False, **kwargs):
if 'private_key' not in kwargs and 'public_key' in kwargs:
kwargs['private_key'] = kwargs['public_key']
- log.warning(
- "OpenSSL no longer allows working with non-signed CSRs. A private_key must be specified. Attempting to use public_key as private_key")
+ log.warning("OpenSSL no longer allows working with non-signed CSRs. "
+ "A private_key must be specified. Attempting to use public_key as private_key")
if 'private_key' not in kwargs:
raise salt.exceptions.SaltInvocationError('private_key is required')
@@ -1640,11 +1629,9 @@ def create_csr(path=None, text=False, **kwargs):
kwargs['private_key_passphrase'] = None
if 'public_key_passphrase' not in kwargs:
kwargs['public_key_passphrase'] = None
- if kwargs['public_key_passphrase'] and not kwargs[
- 'private_key_passphrase']:
+ if kwargs['public_key_passphrase'] and not kwargs['private_key_passphrase']:
kwargs['private_key_passphrase'] = kwargs['public_key_passphrase']
- if kwargs['private_key_passphrase'] and not kwargs[
- 'public_key_passphrase']:
+ if kwargs['private_key_passphrase'] and not kwargs['public_key_passphrase']:
kwargs['public_key_passphrase'] = kwargs['private_key_passphrase']
csr.set_pubkey(get_public_key(kwargs['public_key'],
@@ -1688,18 +1675,10 @@ def create_csr(path=None, text=False, **kwargs):
extstack.push(ext)
csr.add_extensions(extstack)
-
csr.sign(_get_private_key_obj(kwargs['private_key'],
passphrase=kwargs['private_key_passphrase']), kwargs['algorithm'])
- if path:
- return write_pem(
- text=csr.as_pem(),
- path=path,
- pem_type='CERTIFICATE REQUEST'
- )
- else:
- return csr.as_pem()
+ return write_pem(text=csr.as_pem(), path=path, pem_type='CERTIFICATE REQUEST') if path else csr.as_pem()
def verify_private_key(private_key, public_key, passphrase=None):
@@ -1724,8 +1703,7 @@ def verify_private_key(private_key, public_key, passphrase=None):
salt '*' x509.verify_private_key private_key=/etc/pki/myca.key \\
public_key=/etc/pki/myca.crt
'''
- return bool(get_public_key(private_key, passphrase)
- == get_public_key(public_key))
+ return get_public_key(private_key, passphrase) == get_public_key(public_key)
def verify_signature(certificate, signing_pub_key=None,
@@ -1779,9 +1757,8 @@ def verify_crl(crl, cert):
salt '*' x509.verify_crl crl=/etc/pki/myca.crl cert=/etc/pki/myca.crt
'''
if not salt.utils.path.which('openssl'):
- raise salt.exceptions.SaltInvocationError(
- 'openssl binary not found in path'
- )
+ raise salt.exceptions.SaltInvocationError('External command "openssl" not found')
+
crltext = _text_or_file(crl)
crltext = get_pem_entry(crltext, pem_type='X509 CRL')
crltempfile = tempfile.NamedTemporaryFile()
@@ -1802,10 +1779,7 @@ def verify_crl(crl, cert):
crltempfile.close()
certtempfile.close()
- if 'verify OK' in output:
- return True
- else:
- return False
+ return 'verify OK' in output
def expired(certificate):
@@ -1842,8 +1816,9 @@ def expired(certificate):
ret['expired'] = True
else:
ret['expired'] = False
- except ValueError:
- pass
+ except ValueError as err:
+ log.debug('Failed to get data of expired certificate: %s', err)
+ log.trace(err, exc_info=True)
return ret
@@ -1866,6 +1841,7 @@ def will_expire(certificate, days):
salt '*' x509.will_expire "/etc/pki/mycert.crt" days=30
'''
+ ts_pt = "%Y-%m-%d %H:%M:%S"
ret = {}
if os.path.isfile(certificate):
@@ -1875,18 +1851,13 @@ def will_expire(certificate, days):
cert = _get_certificate_obj(certificate)
- _check_time = datetime.datetime.utcnow() + \
- datetime.timedelta(days=days)
+ _check_time = datetime.datetime.utcnow() + datetime.timedelta(days=days)
_expiration_date = cert.get_not_after().get_datetime()
ret['cn'] = _parse_subject(cert.get_subject())['CN']
-
- if _expiration_date.strftime("%Y-%m-%d %H:%M:%S") <= \
- _check_time.strftime("%Y-%m-%d %H:%M:%S"):
- ret['will_expire'] = True
- else:
- ret['will_expire'] = False
- except ValueError:
- pass
+ ret['will_expire'] = _expiration_date.strftime(ts_pt) <= _check_time.strftime(ts_pt)
+ except ValueError as err:
+ log.debug('Unable to return details of a certificate expiration: %s', err)
+ log.trace(err, exc_info=True)
return ret
diff --git a/salt/states/x509.py b/salt/states/x509.py
index 3774f7d5eb..e4cc288dc9 100644
--- a/salt/states/x509.py
+++ b/salt/states/x509.py
@@ -163,6 +163,7 @@ import copy
# Import Salt Libs
import salt.exceptions
+import salt.utils.stringutils
# Import 3rd-party libs
from salt.ext import six
@@ -170,7 +171,7 @@ from salt.ext import six
try:
from M2Crypto.RSA import RSAError
except ImportError:
- pass
+ RSAError = Exception('RSA Error')
def __virtual__():
@@ -180,7 +181,7 @@ def __virtual__():
if 'x509.get_pem_entry' in __salt__:
return 'x509'
else:
- return (False, 'Could not load x509 state: m2crypto unavailable')
+ return False, 'Could not load x509 state: the x509 module is not available'
def _revoked_to_list(revs):
@@ -459,8 +460,10 @@ def certificate_managed(name,
private_key_args['name'], pem_type='RSA PRIVATE KEY')
else:
new_private_key = True
- private_key = __salt__['x509.create_private_key'](text=True, bits=private_key_args['bits'], passphrase=private_key_args[
- 'passphrase'], cipher=private_key_args['cipher'], verbose=private_key_args['verbose'])
+ private_key = __salt__['x509.create_private_key'](text=True, bits=private_key_args['bits'],
+ passphrase=private_key_args['passphrase'],
+ cipher=private_key_args['cipher'],
+ verbose=private_key_args['verbose'])
kwargs['public_key'] = private_key
@@ -671,8 +674,10 @@ def crl_managed(name,
else:
current = '{0} does not exist.'.format(name)
- new_crl = __salt__['x509.create_crl'](text=True, signing_private_key=signing_private_key, signing_private_key_passphrase=signing_private_key_passphrase,
- signing_cert=signing_cert, revoked=revoked, days_valid=days_valid, digest=digest, include_expired=include_expired)
+ new_crl = __salt__['x509.create_crl'](text=True, signing_private_key=signing_private_key,
+ signing_private_key_passphrase=signing_private_key_passphrase,
+ signing_cert=signing_cert, revoked=revoked, days_valid=days_valid,
+ digest=digest, include_expired=include_expired)
new = __salt__['x509.read_crl'](crl=new_crl)
new_comp = new.copy()
@@ -714,6 +719,6 @@ def pem_managed(name,
Any arguments supported by :py:func:`file.managed <salt.states.file.managed>` are supported.
'''
file_args, kwargs = _get_file_args(name, **kwargs)
- file_args['contents'] = __salt__['x509.get_pem_entry'](text=text)
+ file_args['contents'] = salt.utils.stringutils.to_str(__salt__['x509.get_pem_entry'](text=text))
return __states__['file.managed'](**file_args)
--
2.16.4
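A recurring cleanup in this patch replaces HAS_M2/HAS_OPENSSL boolean flags with the module object itself, set to None on import failure. The pattern in isolation, as a sketch:

try:
    import M2Crypto  # optional dependency
except ImportError:
    M2Crypto = None  # gate features on "M2Crypto is None" instead of a flag

def __virtual__():
    # Salt's convention for a failed loader check is a (False, reason) tuple.
    if M2Crypto is None:
        return False, 'Could not load x509 module, m2crypto unavailable'
    return 'x509'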
++++++ xfs-do-not-fails-if-type-is-not-present.patch ++++++
From 9d1e598bf8c7aff612a58405ad864ba701f022c3 Mon Sep 17 00:00:00 2001
From: Alberto Planas <aplanas(a)gmail.com>
Date: Tue, 11 Jun 2019 17:21:05 +0200
Subject: [PATCH] xfs: do not fails if type is not present
The command `blkid -o export` does not always provide a 'TYPE' output
for all devices. One example is non-formatted partitions, such as
the BIOS partition.
This patch does not force the presence of this field in the blkid
output.
(cherry picked from commit 88df6963470007aa4fe2adb09f000311f48226a8)
---
salt/modules/xfs.py | 2 +-
tests/unit/modules/test_xfs.py | 50 ++++++++++++++++++++++++++++++++++++++++++
2 files changed, 51 insertions(+), 1 deletion(-)
create mode 100644 tests/unit/modules/test_xfs.py
diff --git a/salt/modules/xfs.py b/salt/modules/xfs.py
index ce7bd187fe..0116d7600e 100644
--- a/salt/modules/xfs.py
+++ b/salt/modules/xfs.py
@@ -329,7 +329,7 @@ def _blkid_output(out):
for items in flt(dev_meta.strip().split("\n")):
key, val = items.split("=", 1)
dev[key.lower()] = val
- if dev.pop("type") == "xfs":
+ if dev.pop("type", None) == "xfs":
dev['label'] = dev.get('label')
data[dev.pop("devname")] = dev
diff --git a/tests/unit/modules/test_xfs.py b/tests/unit/modules/test_xfs.py
new file mode 100644
index 0000000000..4b423d69d1
--- /dev/null
+++ b/tests/unit/modules/test_xfs.py
@@ -0,0 +1,50 @@
+# -*- coding: utf-8 -*-
+
+# Import Python libs
+from __future__ import absolute_import, print_function, unicode_literals
+import textwrap
+
+# Import Salt Testing Libs
+from tests.support.mixins import LoaderModuleMockMixin
+from tests.support.unit import skipIf, TestCase
+from tests.support.mock import (
+ NO_MOCK,
+ NO_MOCK_REASON,
+ MagicMock,
+ patch)
+
+# Import Salt Libs
+import salt.modules.xfs as xfs
+
+
+@skipIf(NO_MOCK, NO_MOCK_REASON)
+@patch('salt.modules.xfs._get_mounts', MagicMock(return_value={}))
+class XFSTestCase(TestCase, LoaderModuleMockMixin):
+ '''
+ Test cases for salt.modules.xfs
+ '''
+ def setup_loader_modules(self):
+ return {xfs: {}}
+
+ def test__blkid_output(self):
+ '''
+ Test xfs._blkid_output when there is data
+ '''
+ blkid_export = textwrap.dedent('''
+ DEVNAME=/dev/sda1
+ UUID=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
+ TYPE=xfs
+ PARTUUID=YYYYYYYY-YY
+
+ DEVNAME=/dev/sdb1
+ PARTUUID=ZZZZZZZZ-ZZZZ-ZZZZ-ZZZZ-ZZZZZZZZZZZZ
+ ''')
+ # We expect to find only data from /dev/sda1, nothing from
+ # /dev/sdb1
+ self.assertEqual(xfs._blkid_output(blkid_export), {
+ '/dev/sda1': {
+ 'label': None,
+ 'partuuid': 'YYYYYYYY-YY',
+ 'uuid': 'XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX'
+ }
+ })
--
2.16.4
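The fix is dict.pop with a default, so devices that report no TYPE are skipped instead of raising KeyError. A reduced sketch of the filtering loop, assuming the stanzas are already parsed into dictionaries:

def xfs_devices(parsed_blocks):
    # parsed_blocks: one dict per `blkid -o export` stanza, keys lower-cased.
    data = {}
    for dev in parsed_blocks:
        # pop('type', None) tolerates non-formatted partitions with no TYPE.
        if dev.pop('type', None) == 'xfs':
            dev['label'] = dev.get('label')
            data[dev.pop('devname')] = dev
    return data

blocks = [{'devname': '/dev/sda1', 'type': 'xfs', 'uuid': 'X'},
          {'devname': '/dev/sdb1', 'partuuid': 'Z'}]  # no TYPE reported
print(xfs_devices(blocks))  # only /dev/sda1 survives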
++++++ zypperpkg-filter-patterns-that-start-with-dot-243.patch ++++++
From 9942ee8ef43aae698cc2d6d0a9cd7cfaa9f01ef9 Mon Sep 17 00:00:00 2001
From: Alberto Planas <aplanas(a)suse.com>
Date: Thu, 21 May 2020 10:19:03 +0200
Subject: [PATCH] zypperpkg: filter patterns that start with dot (#243)
For versions <= SLE12SP4 some patterns can carry an alias and can appear
duplicated. The aliases start with ".", so they can be filtered out.
If the module tries to search by the alias name (pattern:.basename, for
example), zypper will not be able to find it and the operation will
fail.
This patch detects and filters the aliases, and removes the duplicates.
Fix bsc#1171906
(cherry picked from commit d043db63000df2892b2e7259f580ede81e33724d)
---
salt/modules/zypperpkg.py | 10 ++++++++--
tests/unit/modules/test_zypperpkg.py | 22 ++++++++++++++++++++++
2 files changed, 30 insertions(+), 2 deletions(-)
diff --git a/salt/modules/zypperpkg.py b/salt/modules/zypperpkg.py
index ed8420f398b91b3ef76417d2f11ec59c4051d120..96c3eed851b819ec800e733628e2ae255481bb92 100644
--- a/salt/modules/zypperpkg.py
+++ b/salt/modules/zypperpkg.py
@@ -2302,8 +2302,14 @@ def _get_installed_patterns(root=None):
# a real error.
output = __salt__['cmd.run'](cmd, ignore_retcode=True)
- installed_patterns = [_pattern_name(line) for line in output.splitlines()
- if line.startswith('pattern() = ')]
+ # On <= SLE12SP4 we have patterns that have multiple names (alias)
+ # and that are duplicated. The alias start with ".", so we filter
+ # them.
+ installed_patterns = {
+ _pattern_name(line)
+ for line in output.splitlines()
+ if line.startswith("pattern() = ") and not _pattern_name(line).startswith(".")
+ }
patterns = {k: v for k, v in _get_visible_patterns(root=root).items() if v['installed']}
diff --git a/tests/unit/modules/test_zypperpkg.py b/tests/unit/modules/test_zypperpkg.py
index 9a5c59a8572cb47c947645ed7c0b5c645c48a909..1fce3352c6aa0b5f19c802831bf8583012feb6bf 100644
--- a/tests/unit/modules/test_zypperpkg.py
+++ b/tests/unit/modules/test_zypperpkg.py
@@ -1493,6 +1493,28 @@ pattern() = package-c'''),
},
}
+ @patch("salt.modules.zypperpkg._get_visible_patterns")
+ def test__get_installed_patterns_with_alias(self, get_visible_patterns):
+ """Test installed patterns in the system if they have alias"""
+ get_visible_patterns.return_value = {
+ "package-a": {"installed": True, "summary": "description a"},
+ "package-b": {"installed": False, "summary": "description b"},
+ }
+
+ salt_mock = {
+ "cmd.run": MagicMock(
+ return_value="""pattern() = .package-a-alias
+pattern() = package-a
+pattern-visible()
+pattern() = package-c"""
+ ),
+ }
+ with patch.dict("salt.modules.zypperpkg.__salt__", salt_mock):
+ assert zypper._get_installed_patterns() == {
+ "package-a": {"installed": True, "summary": "description a"},
+ "package-c": {"installed": True, "summary": "Non-visible pattern"},
+ }
+
@patch('salt.modules.zypperpkg._get_visible_patterns')
def test_list_patterns(self, get_visible_patterns):
'''Test available patterns in the repo'''
--
2.23.0
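The fix itself is a set comprehension, which both drops the dot-prefixed aliases and deduplicates the remaining names. A self-contained sketch with a simplified stand-in for zypperpkg._pattern_name:

output = '''pattern() = .package-a-alias
pattern() = package-a
pattern-visible()
pattern() = package-c'''

def pattern_name(line):
    # Simplified stand-in: everything after the '=' sign, stripped.
    return line.split('=', 1)[1].strip()

installed_patterns = {
    pattern_name(line)
    for line in output.splitlines()
    if line.startswith('pattern() = ') and not pattern_name(line).startswith('.')
}
print(sorted(installed_patterns))  # ['package-a', 'package-c']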
Hello community,
here is the log from the commit of package samba for openSUSE:Leap:15.2:Update checked in at 2020-09-01 12:31:10
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Leap:15.2:Update/samba (Old)
and /work/SRC/openSUSE:Leap:15.2:Update/.samba.new.3399 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Package is "samba"
Tue Sep 1 12:31:10 2020 rev:2 rq:830438 version:unknown
Changes:
--------
New Changes file:
NO CHANGES FILE!!!
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Other differences:
------------------
++++++ _link ++++++
--- /var/tmp/diff_new_pack.4fozW5/_old 2020-09-01 12:31:11.632248104 +0200
+++ /var/tmp/diff_new_pack.4fozW5/_new 2020-09-01 12:31:11.632248104 +0200
@@ -1 +1 @@
-<link package='samba.13386' cicount='copy' />
+<link package='samba.13815' cicount='copy' />