commit salt.12491 for openSUSE:Leap:15.1:Update
Hello community,

here is the log from the commit of package salt.12491 for openSUSE:Leap:15.1:Update checked in at 2020-04-30 16:39:39

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Leap:15.1:Update/salt.12491 (Old)
 and      /work/SRC/openSUSE:Leap:15.1:Update/.salt.12491.new.2738 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "salt.12491"

Thu Apr 30 16:39:39 2020 rev:1 rq:799071 version:2019.2.0

Changes:
--------
New Changes file:

--- /dev/null  2020-04-14 14:47:33.391806949 +0200
+++ /work/SRC/openSUSE:Leap:15.1:Update/.salt.12491.new.2738/salt.changes  2020-04-30 16:39:49.439425180 +0200
@@ -0,0 +1,4466 @@
+-------------------------------------------------------------------
+Tue Apr 28 13:10:22 UTC 2020 - Pablo Suárez Hernández <pablo.suarezhernandez@suse.com>
+
+- Fix CVE-2020-11651 and CVE-2020-11652 (bsc#1170595)
+
+- Added:
+  * 0001-Fix-CVE-2020-11651-and-Fix-CVE-2020-11652.patch
+
+-------------------------------------------------------------------
+Thu Mar 5 10:12:14 UTC 2020 - Pablo Suárez Hernández <pablo.suarezhernandez@suse.com>
+
+- Avoid possible user escalation upgrading salt-master (bsc#1157465) (CVE-2019-18897)
+- Fix unit test failures in test_batch_async tests
+- Batch Async: Handle exceptions, properly unregister and close instances after
+  running async batching to avoid CPU starvation of the MWorkers (bsc#1162327)
+- RHEL/CentOS 8 uses platform-python instead of python3
+- New configuration option for selection of grains in the minion start event.
+- Fixes tests that were broken due to merge conflict
+- Fix 'os_family' grain for Astra Linux Common Edition
+- Fix for salt-api NET API where unauthenticated attacker could run
+  arbitrary code (CVE-2019-17361) (bsc#1162504)
+- Adds disabled parameter to mod_repo in aptpkg module
+  Move token with atomic operation
+  Bad API token files get deleted (bsc#1160931)
+- Support for Btrfs and XFS in parted and mkfs added
+- Adds list_downloaded for apt Module to enable pre-downloading support
+  Adds virt.(pool|network)_get_xml functions
+- Various libvirt updates:
+  * Add virt.pool_capabilities function
+  * virt.pool_running improvements
+  * Add virt.pool_deleted state
+  * virt.network_define allow adding IP configuration
+- virt: adding kernel boot parameters to libvirt xml
+- Fix to scheduler when data['run'] does not exist (bsc#1159118)
+- Fix virt states to not fail on VMs already stopped
+- Fix applying of attributes for returner rawfile_json (bsc#1158940)
+- xfs: do not fail if type is not present (bsc#1153611)
+- Don't use __python indirection macros on spec file
+  %__python is no longer defined in RPM 4.15 (python2 is going EOL in Jan 2020);
+  additionally, python/python3 are just binaries in the path.
+- Fix errors when running virt.get_hypervisor function
+- Align virt.full_info fixes with upstream Salt
+- Fix for log checking in x509 test
+- Prevent test_mod_del_repo_multiline_values to fail
+- Read repo info without using interpolation (bsc#1135656)
+- Requires vs BuildRequires
+- Limiting M2Crypto to >= SLE15
+- Replacing pycrypto with M2Crypto (bsc#1165425)
+
+- Added:
+  * fix-for-log-checking-in-x509-test.patch
+  * delete-bad-api-token-files.patch
+  * fix-batch_async-obsolete-test.patch
+  * batch_async-avoid-using-fnmatch-to-match-event-217.patch
+  * align-virt-full-info-fixes-with-upstream-192.patch
+  * add-astra-linux-common-edition-to-the-os-family-list.patch
+  * list_downloaded-for-apt-module.patch
+  * fix-virt-states-to-not-fail-on-vms-already-stopped.-.patch
+  * fix-unit-tests-for-batch-async-after-refactor.patch
+  * fix-applying-of-attributes-for-returner-rawfile_json.patch
+  * read-repo-info-without-using-interpolation-bsc-11356.patch
+  * restrict-the-start_event_grains-only-to-the-start-ev.patch
+  * xfs-do-not-fails-if-type-is-not-present.patch
+  * move-tokens-in-place-with-an-atomic-operation.patch
+  * virt-adding-kernel-boot-parameters-to-libvirt-xml-55.patch
+  * support-for-btrfs-and-xfs-in-parted-and-mkfs.patch
+  * add-virt.network_get_xml-function.patch
+  * virt.network_define-allow-adding-ip-configuration.patch
+  * fix-schedule.run_job-port-upstream-pr-54799-194.patch
+  * various-netapi-fixes-and-tests.patch
+  * batch-async-catch-exceptions-and-safety-unregister-a.patch
+  * prevent-test_mod_del_repo_multiline_values-to-fail.patch
+  * fix-virt.get_hypervisor-188.patch
+  * adds-enabled-kwarg.patch
+  * enable-passing-grains-to-start-event-based-on-start_.patch
+
+-------------------------------------------------------------------
+Wed Dec 4 15:45:04 UTC 2019 - Pablo Suárez Hernández <pablo.suarezhernandez@suse.com>
+
+- Let salt-ssh use platform-python on RHEL8 (bsc#1158441)
+
+- Added:
+  * let-salt-ssh-use-platform-python-binary-in-rhel8-191.patch
+
+-------------------------------------------------------------------
+Mon Dec 2 15:02:55 UTC 2019 - Pablo Suárez Hernández <pablo.suarezhernandez@suse.com>
+
+- Fix StreamClosedError issue (bsc#1157479)
+
+- Added:
+  * fixing-streamclosed-issue.patch
+
+-------------------------------------------------------------------
+Mon Nov 25 11:29:19 UTC 2019 - Pablo Suárez Hernández <pablo.suarezhernandez@suse.com>
+
+- Remove virt.pool_delete fast parameter (U#54474)
+- Remove unnecessary yield causing BadYieldError (bsc#1154620)
+- Prevent 'Already reading' continuous exception message (bsc#1137642)
+- Fix for aptpkg test with older mock modules
+- Remove wrong tests for core grain and improve debug logging
+- Use rich RPM deps to get a compatible version of tornado into the buildroot.
+- Accumulated changes from Yomi:
+  core.py: ignore wrong product_name files
+  zypperpkg: understand product type
+- Enable usage of downloadonly parameter for apt module
+
+- Added:
+  * prevent-already-reading-continuous-exception-message.patch
+  * accumulated-changes-from-yomi-167.patch
+  * remove-virt.pool_delete-fast-parameter-178.patch
+  * remove-unnecessary-yield-causing-badyielderror-bsc-1.patch
+  * fix-a-wrong-rebase-in-test_core.py-180.patch
+  * adds-the-possibility-to-also-use-downloadonly-in-kwa.patch
+  * fix-for-older-mock-module.patch
+
+-------------------------------------------------------------------
+Thu Oct 17 13:07:41 UTC 2019 - Pablo Suárez Hernández <pablo.suarezhernandez@suse.com>
+
+- Add missing 'fun' on events coming from salt-ssh wfunc executions (bsc#1151947)
+- Fix failing unit tests for batch async
+- Fix memory consumption problem on BatchAsync (bsc#1137642)
+- Remove wrong %endif on spec file
+- fix dependencies for RHEL 8
+- Differentiating between markupsafe and MarkupSafe for Python3
+- Prevent systemd-run description issue when running aptpkg (bsc#1152366)
+- Take checksums arg into account for postgres.datadir_init (bsc#1151650)
+- Improve batch_async to release consumed memory (bsc#1140912)
+- Require shadow instead of old pwdutils (bsc#1130588)
+- Conflict with tornado >= 5; for now we can only cope with Tornado 4.x (boo#1101780).
+- Fix virt.full_info (bsc#1146382)
+- virt.volume_infos: silence libvirt error message
+- virt.volume_infos needs to ignore inactive pools
+- Fix for various bugs in virt network and pool states
+- Implement network.fqdns module function (bsc#1134860)
+- Strip trailing "/" from repo.uri when comparing repos in aptpkg.mod_repo (bsc#1146192)
+- Make python3 default for RHEL8
+- Use python3 to build package Salt for RHEL8
+- Fix aptpkg systemd call (bsc#1143301)
+- Move server_id deprecation warning to reduce log spamming (bsc#1135567) (bsc#1135732)
+
+- Added:
+  * fix-virt.full_info-176.patch
+  * move-server_id-deprecation-warning-to-reduce-log-spa.patch
+  * use-current-ioloop-for-the-localclient-instance-of-b.patch
+  * strip-trailing-from-repo.uri-when-comparing-repos-in.patch
+  * add-missing-fun-for-returns-from-wfunc-executions.patch
+  * improve-batch_async-to-release-consumed-memory-bsc-1.patch
+  * virt.volume_infos-silence-libvirt-error-message-175.patch
+  * virt.volume_infos-needs-to-ignore-inactive-pools-174.patch
+  * implement-network.fqdns-module-function-bsc-1134860-.patch
+  * 2019.2.0-pr-54196-backport-173.patch
+  * fix-failing-unit-tests-for-batch-async.patch
+  * fix-aptpkg-systemd-call-bsc-1143301.patch
+  * prevent-systemd-run-description-issue-when-running-a.patch
+  * take-checksums-arg-into-account-for-postgres.datadir.patch
+
+-------------------------------------------------------------------
+Mon Sep 16 10:47:06 UTC 2019 - Pablo Suárez Hernández <pablo.suarezhernandez@suse.com>
+
+- Fix memory leak produced by batch async find_jobs mechanism (bsc#1140912)
+- Grant read and execute permission to others (bsc#1150447)
+
+  Since Tomcat is running under the user "tomcat" and it needs to have
+  read permission there, we have to grant read and execute permissions for
+  /usr/share/salt-formulas
+
+- Added:
+  * fix-memory-leak-produced-by-batch-async-find_jobs-me.patch
+
+-------------------------------------------------------------------
+Mon Sep 2 09:11:22 UTC 2019 - Jochen Breuer <jbreuer@suse.de>
+
+- Restore default behaviour of pkg list return (bsc#1148714)
+
+- Added:
+  * restore-default-behaviour-of-pkg-list-return.patch
+
+-------------------------------------------------------------------
+Tue Jul 30 13:41:02 UTC 2019 - Mihai Dincă <mihai.dinca@suse.com>
+
+- Multiple fixes on cmdmod, chroot, freezer and zypperpkg needed for Yomi
+
+  cmdmod: fix runas and group in run_chroot
+  chroot: add missing sys directory
+  chroot: change variable name to root
+  chroot: fix bug in safe_kwargs iteration
+  freezer: do not fail if cache dir is present
+  freezer: clean freeze YAML profile on restore
+  zypperpkg: fix pkg.list_pkgs cache
+- Avoid traceback on http.query when there are errors with the requested URL (bsc#1128554)
+- Salt python client get_full_returns seems to return data from incorrect jid (bsc#1131114)
+- virt.volume_infos: don't raise an error if there is no VM
+- Prevent ansiblegate unit tests to fail on Ubuntu
+- Allow passing kwargs to pkg.list_downloaded for Zypper (bsc#1140193)
+- Do not make "ansiblegate" module to crash on Python3 minions (bsc#1139761)
+- Provide the missing features required for Yomi (Yet one more installer)
+- Set 'salt' group for files and directories created by salt-standalone-formulas-configuration package
++++ 4269 more lines (skipped)
++++ between /dev/null
++++ and /work/SRC/openSUSE:Leap:15.1:Update/.salt.12491.new.2738/salt.changes

New:
----
  0001-Fix-CVE-2020-11651-and-Fix-CVE-2020-11652.patch
  2019.2.0-pr-54196-backport-173.patch
  README.SUSE
  _lastrevision
  _service
  accumulated-changes-from-yomi-167.patch
  accumulated-changes-required-for-yomi-165.patch
  activate-all-beacons-sources-config-pillar-grains.patch
  add-all_versions-parameter-to-include-all-installed-.patch
  add-astra-linux-common-edition-to-the-os-family-list.patch
  add-batch_presence_ping_timeout-and-batch_presence_p.patch
  add-cpe_name-for-osversion-grain-parsing-u-49946.patch
  add-custom-suse-capabilities-as-grains.patch
  add-environment-variable-to-know-if-yum-is-invoked-f.patch
  add-hold-unhold-functions.patch
  add-missing-fun-for-returns-from-wfunc-executions.patch
  add-multi-file-support-and-globbing-to-the-filetree-.patch
  add-ppc64le-as-a-valid-rpm-package-architecture.patch
  add-saltssh-multi-version-support-across-python-inte.patch
  add-standalone-configuration-file-for-enabling-packa.patch
  add-supportconfig-module-for-remote-calls-and-saltss.patch
  add-virt.all_capabilities.patch
  add-virt.network_get_xml-function.patch
  add-virt.volume_infos-and-virt.volume_delete.patch
  adds-enabled-kwarg.patch
  adds-the-possibility-to-also-use-downloadonly-in-kwa.patch
  align-virt-full-info-fixes-with-upstream-192.patch
  allow-passing-kwargs-to-pkg.list_downloaded-bsc-1140.patch
  async-batch-implementation.patch
  avoid-excessive-syslogging-by-watchdog-cronjob-58.patch
  avoid-traceback-when-http.query-request-cannot-be-pe.patch
  azurefs-gracefully-handle-attributeerror.patch
  batch-async-catch-exceptions-and-safety-unregister-a.patch
  batch.py-avoid-exception-when-minion-does-not-respon.patch
  batch_async-avoid-using-fnmatch-to-match-event-217.patch
  bugfix-any-unicode-string-of-length-16-will-raise-ty.patch
  calculate-fqdns-in-parallel-to-avoid-blockings-bsc-1.patch
  checking-for-jid-before-returning-data.patch
  debian-info_installed-compatibility-50453.patch
  decide-if-the-source-should-be-actually-skipped.patch
  delete-bad-api-token-files.patch
  do-not-break-repo-files-with-multiple-line-values-on.patch
  do-not-crash-when-there-are-ipv6-established-connect.patch
  do-not-load-pip-state-if-there-is-no-3rd-party-depen.patch
  do-not-make-ansiblegate-to-crash-on-python3-minions.patch
  do-not-report-patches-as-installed-when-not-all-the-.patch
  don-t-call-zypper-with-more-than-one-no-refresh.patch
  early-feature-support-config.patch
  enable-passing-a-unix_socket-for-mysql-returners-bsc.patch
  enable-passing-grains-to-start-event-based-on-start_.patch
  fall-back-to-pymysql.patch
  fix-a-wrong-rebase-in-test_core.py-180.patch
  fix-applying-of-attributes-for-returner-rawfile_json.patch
  fix-aptpkg-systemd-call-bsc-1143301.patch
  fix-async-batch-multiple-done-events.patch
  fix-async-batch-race-conditions.patch
  fix-batch_async-obsolete-test.patch
  fix-bsc-1065792.patch
  fix-failing-unit-tests-for-batch-async.patch
  fix-for-log-checking-in-x509-test.patch
  fix-for-older-mock-module.patch
  fix-for-suse-expanded-support-detection.patch
  fix-git_pillar-merging-across-multiple-__env__-repos.patch
  fix-ipv6-scope-bsc-1108557.patch
  fix-issue-2068-test.patch
  fix-memory-leak-produced-by-batch-async-find_jobs-me.patch
  fix-schedule.run_job-port-upstream-pr-54799-194.patch
  fix-syndic-start-issue.patch
  fix-unit-test-for-grains-core.patch
  fix-unit-tests-for-batch-async-after-refactor.patch
  fix-virt-states-to-not-fail-on-vms-already-stopped.-.patch
  fix-virt.full_info-176.patch
  fix-virt.get_hypervisor-188.patch
  fix-zypper-pkg.list_pkgs-expectation-and-dpkg-mockin.patch
  fix-zypper.list_pkgs-to-be-aligned-with-pkg-state.patch
  fixes-cve-2018-15750-cve-2018-15751.patch
  fixing-streamclosed-issue.patch
  get-os_arch-also-without-rpm-package-installed.patch
  html.tar.bz2
  implement-network.fqdns-module-function-bsc-1134860-.patch
  improve-batch_async-to-release-consumed-memory-bsc-1.patch
  include-aliases-in-the-fqdns-grains.patch
  integration-of-msi-authentication-with-azurearm-clou.patch
  let-salt-ssh-use-platform-python-binary-in-rhel8-191.patch
  list_downloaded-for-apt-module.patch
  loosen-azure-sdk-dependencies-in-azurearm-cloud-driv.patch
  make-aptpkg.list_repos-compatible-on-enabled-disable.patch
  make-profiles-a-package.patch
  mount-fix-extra-t-parameter.patch
  move-server_id-deprecation-warning-to-reduce-log-spa.patch
  move-tokens-in-place-with-an-atomic-operation.patch
  preserve-already-defined-destructive_tests-and-expen.patch
  preserving-signature-in-module.run-state-u-50049.patch
  prevent-already-reading-continuous-exception-message.patch
  prevent-ansiblegate-unit-tests-to-fail-on-ubuntu.patch
  prevent-systemd-run-description-issue-when-running-a.patch
  prevent-test_mod_del_repo_multiline_values-to-fail.patch
  provide-the-missing-features-required-for-yomi-yet-o.patch
  read-repo-info-without-using-interpolation-bsc-11356.patch
  remove-arch-from-name-when-pkg.list_pkgs-is-called-w.patch
  remove-unnecessary-yield-causing-badyielderror-bsc-1.patch
  remove-virt.pool_delete-fast-parameter-178.patch
  restore-default-behaviour-of-pkg-list-return.patch
  restrict-the-start_event_grains-only-to-the-start-ev.patch
  return-the-expected-powerpc-os-arch-bsc-1117995.patch
  run-salt-api-as-user-salt-bsc-1064520.patch
  run-salt-master-as-dedicated-salt-user.patch
  salt-tmpfiles.d
  salt.changes
  salt.spec
  strip-trailing-from-repo.uri-when-comparing-repos-in.patch
  support-config-non-root-permission-issues-fixes-u-50.patch
  support-for-btrfs-and-xfs-in-parted-and-mkfs.patch
  switch-firewalld-state-to-use-change_interface.patch
  take-checksums-arg-into-account-for-postgres.datadir.patch
  temporary-fix-extend-the-whitelist-of-allowed-comman.patch
  travis.yml
  try-except-undefineflags-as-this-operation-is-not-su.patch
  update-documentation.sh
  use-adler32-algorithm-to-compute-string-checksums.patch
  use-current-ioloop-for-the-localclient-instance-of-b.patch
  use-threadpool-from-multiprocessing.pool-to-avoid-le.patch
  v2019.2.0.tar.gz
  various-netapi-fixes-and-tests.patch
  virt-1.volume_infos-fix-for-single-vm.patch
  virt-adding-kernel-boot-parameters-to-libvirt-xml-55.patch
  virt-handle-whitespaces-in-vm-names.patch
  virt.network_define-allow-adding-ip-configuration.patch
  virt.pool_running-fix-pool-start.patch
  virt.volume_infos-fix-for-single-vm.patch
  virt.volume_infos-needs-to-ignore-inactive-pools-174.patch
  virt.volume_infos-silence-libvirt-error-message-175.patch
  x509-fixes-111.patch
  xfs-do-not-fails-if-type-is-not-present.patch

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ salt.spec ++++++
++++ 1704 lines (skipped)

++++++ 0001-Fix-CVE-2020-11651-and-Fix-CVE-2020-11652.patch ++++++
++++ 766 lines (skipped)

++++++ 2019.2.0-pr-54196-backport-173.patch ++++++
From 3119bc27584472b0f0d440a37ec4cff2504165f2 Mon Sep 17 00:00:00 2001
From: Cedric Bosdonnat <cbosdonnat@suse.com>
Date: Tue, 3 Sep 2019 15:16:30 +0200
Subject: [PATCH] 2019.2.0 PR 54196 backport (#173)
* virt.network_define doesn't have vport as positional argument virt.network_running state calls virt.network_define with vport as a positional argument resulting in an error at runtime. Fix the state to use the vport named argument instead. * Fix virt.pool_running state documentation virt.pool_running needs the source to be a dictionary, which the documentation was not reflecting. Along the same lines the source hosts need to be a list, adjust the example to show it. * Get virt.pool_running to start the pool after creating it Commit 25b96815 is wrong in assuming the pool build also starts it. The pool needs to be stopped before building it, but we still need to start it after the build: libvirt won't do it automagically for us. * Fix states to match virt.{network,pool}_infos return virt.network_infos and virt.pool_infos return the infos as a dictionary with the network or pool name as a key even when there is only one value. Adapt the network_running and pool_running states to this. * Fix virt.running use of virt.vm_state vm_state return a dictionary with the VM name as a key. Fix virt.running state and its tests to match this. See issue #53107. --- salt/states/virt.py | 26 ++++++++++++++++---------- tests/unit/states/test_virt.py | 27 +++++++++++++++------------ 2 files changed, 31 insertions(+), 22 deletions(-) diff --git a/salt/states/virt.py b/salt/states/virt.py index d411f864cd..32a9e31ae5 100644 --- a/salt/states/virt.py +++ b/salt/states/virt.py @@ -389,8 +389,8 @@ def running(name, try: try: - __salt__['virt.vm_state'](name) - if __salt__['virt.vm_state'](name) != 'running': + domain_state = __salt__['virt.vm_state'](name) + if domain_state.get(name, None) != 'running': action_msg = 'started' if update: status = __salt__['virt.update'](name, @@ -670,7 +670,7 @@ def network_running(name, try: info = __salt__['virt.network_info'](name, connection=connection, username=username, password=password) if info: - if info['active']: + if info[name]['active']: ret['comment'] = 'Network {0} exists and is running'.format(name) else: __salt__['virt.network_start'](name, connection=connection, username=username, password=password) @@ -680,7 +680,7 @@ def network_running(name, __salt__['virt.network_define'](name, bridge, forward, - vport, + vport=vport, tag=tag, autostart=autostart, start=True, @@ -744,11 +744,11 @@ def pool_running(name, - owner: 1000 - group: 100 - source: - - dir: samba_share - - hosts: - one.example.com - two.example.com - - format: cifs + dir: samba_share + hosts: + - one.example.com + - two.example.com + format: cifs - autostart: True ''' @@ -761,7 +761,7 @@ def pool_running(name, try: info = __salt__['virt.pool_info'](name, connection=connection, username=username, password=password) if info: - if info['state'] == 'running': + if info[name]['state'] == 'running': ret['comment'] = 'Pool {0} exists and is running'.format(name) else: __salt__['virt.pool_start'](name, connection=connection, username=username, password=password) @@ -795,6 +795,12 @@ def pool_running(name, connection=connection, username=username, password=password) + + __salt__['virt.pool_start'](name, + connection=connection, + username=username, + password=password) + ret['changes'][name] = 'Pool defined and started' ret['comment'] = 'Pool {0} defined and started'.format(name) except libvirt.libvirtError as err: diff --git a/tests/unit/states/test_virt.py b/tests/unit/states/test_virt.py index 8022989937..2904fa224d 100644 --- a/tests/unit/states/test_virt.py +++ b/tests/unit/states/test_virt.py @@ -229,7 
+229,7 @@ class LibvirtTestCase(TestCase, LoaderModuleMockMixin): 'result': True, 'comment': 'myvm is running'} with patch.dict(virt.__salt__, { # pylint: disable=no-member - 'virt.vm_state': MagicMock(return_value='stopped'), + 'virt.vm_state': MagicMock(return_value={'myvm': 'stopped'}), 'virt.start': MagicMock(return_value=0), }): ret.update({'changes': {'myvm': 'Domain started'}, @@ -322,7 +322,7 @@ class LibvirtTestCase(TestCase, LoaderModuleMockMixin): password='supersecret') with patch.dict(virt.__salt__, { # pylint: disable=no-member - 'virt.vm_state': MagicMock(return_value='stopped'), + 'virt.vm_state': MagicMock(return_value={'myvm': 'stopped'}), 'virt.start': MagicMock(side_effect=[self.mock_libvirt.libvirtError('libvirt error msg')]) }): ret.update({'changes': {}, 'result': False, 'comment': 'libvirt error msg'}) @@ -330,7 +330,7 @@ class LibvirtTestCase(TestCase, LoaderModuleMockMixin): # Working update case when running with patch.dict(virt.__salt__, { # pylint: disable=no-member - 'virt.vm_state': MagicMock(return_value='running'), + 'virt.vm_state': MagicMock(return_value={'myvm': 'running'}), 'virt.update': MagicMock(return_value={'definition': True, 'cpu': True}) }): ret.update({'changes': {'myvm': {'definition': True, 'cpu': True}}, @@ -340,7 +340,7 @@ class LibvirtTestCase(TestCase, LoaderModuleMockMixin): # Working update case when stopped with patch.dict(virt.__salt__, { # pylint: disable=no-member - 'virt.vm_state': MagicMock(return_value='stopped'), + 'virt.vm_state': MagicMock(return_value={'myvm': 'stopped'}), 'virt.start': MagicMock(return_value=0), 'virt.update': MagicMock(return_value={'definition': True}) }): @@ -351,7 +351,7 @@ class LibvirtTestCase(TestCase, LoaderModuleMockMixin): # Failed live update case with patch.dict(virt.__salt__, { # pylint: disable=no-member - 'virt.vm_state': MagicMock(return_value='running'), + 'virt.vm_state': MagicMock(return_value={'myvm': 'running'}), 'virt.update': MagicMock(return_value={'definition': True, 'cpu': False, 'errors': ['some error']}) }): ret.update({'changes': {'myvm': {'definition': True, 'cpu': False, 'errors': ['some error']}}, @@ -361,7 +361,7 @@ class LibvirtTestCase(TestCase, LoaderModuleMockMixin): # Failed definition update case with patch.dict(virt.__salt__, { # pylint: disable=no-member - 'virt.vm_state': MagicMock(return_value='running'), + 'virt.vm_state': MagicMock(return_value={'myvm': 'running'}), 'virt.update': MagicMock(side_effect=[self.mock_libvirt.libvirtError('error message')]) }): ret.update({'changes': {}, @@ -573,7 +573,7 @@ class LibvirtTestCase(TestCase, LoaderModuleMockMixin): define_mock.assert_called_with('mynet', 'br2', 'bridge', - 'openvswitch', + vport='openvswitch', tag=180, autostart=False, start=True, @@ -582,7 +582,7 @@ class LibvirtTestCase(TestCase, LoaderModuleMockMixin): password='secret') with patch.dict(virt.__salt__, { # pylint: disable=no-member - 'virt.network_info': MagicMock(return_value={'active': True}), + 'virt.network_info': MagicMock(return_value={'mynet': {'active': True}}), 'virt.network_define': define_mock, }): ret.update({'changes': {}, 'comment': 'Network mynet exists and is running'}) @@ -590,7 +590,7 @@ class LibvirtTestCase(TestCase, LoaderModuleMockMixin): start_mock = MagicMock(return_value=True) with patch.dict(virt.__salt__, { # pylint: disable=no-member - 'virt.network_info': MagicMock(return_value={'active': False}), + 'virt.network_info': MagicMock(return_value={'mynet': {'active': False}}), 'virt.network_start': start_mock, 
'virt.network_define': define_mock, }): @@ -666,10 +666,13 @@ class LibvirtTestCase(TestCase, LoaderModuleMockMixin): connection='myconnection', username='user', password='secret') - mocks['start'].assert_not_called() + mocks['start'].assert_called_with('mypool', + connection='myconnection', + username='user', + password='secret') with patch.dict(virt.__salt__, { # pylint: disable=no-member - 'virt.pool_info': MagicMock(return_value={'state': 'running'}), + 'virt.pool_info': MagicMock(return_value={'mypool': {'state': 'running'}}), }): ret.update({'changes': {}, 'comment': 'Pool mypool exists and is running'}) self.assertDictEqual(virt.pool_running('mypool', @@ -680,7 +683,7 @@ class LibvirtTestCase(TestCase, LoaderModuleMockMixin): for mock in mocks: mocks[mock].reset_mock() with patch.dict(virt.__salt__, { # pylint: disable=no-member - 'virt.pool_info': MagicMock(return_value={'state': 'stopped'}), + 'virt.pool_info': MagicMock(return_value={'mypool': {'state': 'stopped'}}), 'virt.pool_build': mocks['build'], 'virt.pool_start': mocks['start'] }):
--
2.20.1

++++++ README.SUSE ++++++
Salt-master as non-root user
============================

With this version of salt the salt-master will run as the salt user.

Why an extra user
=================

While the current setup runs the master as root user, this is considered
a security issue and not in line with the other configuration management
tools (e.g. puppet), which run as a dedicated user.

How can I undo the change
=========================

If you would like to undo the change, you can do the following steps
manually:

1. change the user parameter in the master configuration:

   user: root

2. update the file permissions:

   as root: chown -R root /etc/salt /var/cache/salt /var/log/salt /var/run/salt

3. restart the salt-master daemon:

   as root: rcsalt-master restart or systemctl restart salt-master

NOTE
====

Running the salt-master daemon as a root user is considered by some a
security risk, but running as root enables the pam external auth system,
as this system needs root access to check authentication.

For more information:
http://docs.saltstack.com/en/latest/ref/configuration/nonroot.html

++++++ _lastrevision ++++++
c30920a0e09e1c4b12a179967e80ec32b412c5ad

++++++ _service ++++++
<services>
  <service name="tar_scm" mode="disabled">
    <param name="url">https://github.com/openSUSE/salt-packaging.git</param>
    <param name="subdir">salt</param>
    <param name="filename">package</param>
    <param name="revision">MU-4.0.5</param>
    <param name="scm">git</param>
  </service>
  <service name="extract_file" mode="disabled">
    <param name="archive">*package*.tar</param>
    <param name="files">*/*</param>
  </service>
  <service name="download_url" mode="disabled">
    <param name="host">codeload.github.com</param>
    <param name="path">openSUSE/salt/tar.gz/v2019.2.0-suse</param>
    <param name="filename">v2019.2.0.tar.gz</param>
  </service>
  <service name="update_changelog" mode="disabled"></service>
</services>

++++++ accumulated-changes-from-yomi-167.patch ++++++
From 46a60d81604eaf6f9fc3712e02d1066e959c96e3 Mon Sep 17 00:00:00 2001
From: Alberto Planas <aplanas@gmail.com>
Date: Tue, 22 Oct 2019 11:02:33 +0200
Subject: [PATCH] Accumulated changes from Yomi (#167)
* core.py: ignore wrong product_name files

Some firmwares (like some NUC machines) do not provide valid
/sys/class/dmi/id/product_name strings. In those cases a
UnicodeDecodeError exception happens.

This patch ignores this kind of issue during grains creation.

(cherry picked from commit 2d57d2a6063488ad9329a083219e3826e945aa2d)

* zypperpkg: understand product type

(cherry picked from commit b865491b74679140f7a71c5ba50d482db47b600f)
--- salt/grains/core.py | 4 +++ salt/modules/zypperpkg.py | 30 +++++++++++++------ tests/unit/grains/test_core.py | 45 ++++++++++++++++++++++ tests/unit/modules/test_zypperpkg.py | 26 ++++++++++++++ 4 files changed, 96 insertions(+), 9 deletions(-) diff --git a/salt/grains/core.py b/salt/grains/core.py index fa188a6ff7..fdabe484a8 100644 --- a/salt/grains/core.py +++ b/salt/grains/core.py @@ -986,6 +986,10 @@ def _virtual(osdata): grains['virtual'] = 'gce' elif 'BHYVE' in output: grains['virtual'] = 'bhyve' + except UnicodeDecodeError: + # Some firmwares provide non-valid 'product_name' + # files, ignore them + pass except IOError: pass elif osdata['kernel'] == 'FreeBSD': diff --git a/salt/modules/zypperpkg.py b/salt/modules/zypperpkg.py index da1953b2a5..a87041aa70 100644 --- a/salt/modules/zypperpkg.py +++ b/salt/modules/zypperpkg.py @@ -861,23 +861,35 @@ def list_pkgs(versions_as_list=False, root=None, includes=None, **kwargs): _ret[pkgname] = sorted(ret[pkgname], key=lambda d: d['version']) for include in includes: + if include == 'product': + products = list_products(all=False, root=root) + for product in products: + extended_name = '{}:{}'.format(include, product['name']) + _ret[extended_name] = [{ + 'epoch': product['epoch'], + 'version': product['version'], + 'release': product['release'], + 'arch': product['arch'], + 'install_date': None, + 'install_date_time_t': None, + }] if include in ('pattern', 'patch'): if include == 'pattern': - pkgs = list_installed_patterns(root=root) + elements = list_installed_patterns(root=root) elif include == 'patch': - pkgs = list_installed_patches(root=root) + elements = list_installed_patches(root=root) else: - pkgs = [] - for pkg in pkgs: - pkg_extended_name = '{}:{}'.format(include, pkg) - info = info_available(pkg_extended_name, + elements = [] + for element in elements: + extended_name = '{}:{}'.format(include, element) + info = info_available(extended_name, refresh=False, root=root) - _ret[pkg_extended_name] = [{ + _ret[extended_name] = [{ 'epoch': None, - 'version': info[pkg]['version'], + 'version': info[element]['version'], 'release': None, - 'arch': info[pkg]['arch'], + 'arch': info[element]['arch'], 'install_date': None, 'install_date_time_t': None, }] diff --git a/tests/unit/grains/test_core.py b/tests/unit/grains/test_core.py index 889fb90074..aa04a7a7ac 100644 --- a/tests/unit/grains/test_core.py +++ b/tests/unit/grains/test_core.py @@ -1117,6 +1117,51 @@ class CoreGrainsTestCase(TestCase, LoaderModuleMockMixin): 'uuid': '' }) + @skipIf(not salt.utils.platform.is_linux(), 'System is not Linux') + def test_kernelparams_return(self): + expectations = [ + ('BOOT_IMAGE=/vmlinuz-3.10.0-693.2.2.el7.x86_64', + {'kernelparams': [('BOOT_IMAGE', '/vmlinuz-3.10.0-693.2.2.el7.x86_64')]}), + ('root=/dev/mapper/centos_daemon-root', + {'kernelparams': [('root', '/dev/mapper/centos_daemon-root')]}), + ('rhgb quiet ro', + {'kernelparams': [('rhgb', None), ('quiet', None), ('ro', None)]}), + ('param="value1"', + {'kernelparams': [('param', 'value1')]}), + ('param="value1 value2 value3"', + {'kernelparams':
[('param', 'value1 value2 value3')]}), + ('param="value1 value2 value3" LANG="pl" ro', + {'kernelparams': [('param', 'value1 value2 value3'), ('LANG', 'pl'), ('ro', None)]}), + ('ipv6.disable=1', + {'kernelparams': [('ipv6.disable', '1')]}), + ('param="value1:value2:value3"', + {'kernelparams': [('param', 'value1:value2:value3')]}), + ('param="value1,value2,value3"', + {'kernelparams': [('param', 'value1,value2,value3')]}), + ('param="value1" param="value2" param="value3"', + {'kernelparams': [('param', 'value1'), ('param', 'value2'), ('param', 'value3')]}), + ] + + for cmdline, expectation in expectations: + with patch('salt.utils.files.fopen', mock_open(read_data=cmdline)): + self.assertEqual(core.kernelparams(), expectation) + + @skipIf(not salt.utils.platform.is_linux(), 'System is not Linux') + @patch('os.path.exists') + @patch('salt.utils.platform.is_proxy') + def test__hw_data_linux_empty(self, is_proxy, exists): + is_proxy.return_value = False + exists.return_value = True + with patch('salt.utils.files.fopen', mock_open(read_data='')): + self.assertEqual(core._hw_data({'kernel': 'Linux'}), { + 'biosreleasedate': '', + 'biosversion': '', + 'manufacturer': '', + 'productname': '', + 'serialnumber': '', + 'uuid': '' + }) + @skipIf(not salt.utils.platform.is_linux(), 'System is not Linux') @skipIf(six.PY2, 'UnicodeDecodeError is throw in Python 3') @patch('os.path.exists') diff --git a/tests/unit/modules/test_zypperpkg.py b/tests/unit/modules/test_zypperpkg.py index 695d982ca6..7617113401 100644 --- a/tests/unit/modules/test_zypperpkg.py +++ b/tests/unit/modules/test_zypperpkg.py @@ -943,6 +943,32 @@ Repository 'DUMMY' not found by its alias, number, or URI. with self.assertRaisesRegex(CommandExecutionError, '^Advisory id "SUSE-PATCH-XXX" not found$'): zypper.install(advisory_ids=['SUSE-PATCH-XXX']) + @patch('salt.modules.zypperpkg._systemd_scope', + MagicMock(return_value=False)) + @patch('salt.modules.zypperpkg.list_products', + MagicMock(return_value={'openSUSE': {'installed': False, 'summary': 'test'}})) + @patch('salt.modules.zypperpkg.list_pkgs', MagicMock(side_effect=[{"product:openSUSE": "15.2"}, + {"product:openSUSE": "15.3"}])) + def test_install_product_ok(self): + ''' + Test successfully product installation. + ''' + with patch.dict(zypper.__salt__, + { + 'pkg_resource.parse_targets': MagicMock( + return_value=(['product:openSUSE'], None)) + }): + with patch('salt.modules.zypperpkg.__zypper__.noraise.call', MagicMock()) as zypper_mock: + ret = zypper.install('product:openSUSE', includes=['product']) + zypper_mock.assert_called_once_with( + '--no-refresh', + 'install', + '--auto-agree-with-licenses', + '--name', + 'product:openSUSE' + ) + self.assertDictEqual(ret, {"product:openSUSE": {"old": "15.2", "new": "15.3"}}) + def test_remove_purge(self): ''' Test package removal -- 2.23.0 ++++++ accumulated-changes-required-for-yomi-165.patch ++++++
From 8cd87eba73df54a9ede47eda9425e6ffceff7ac0 Mon Sep 17 00:00:00 2001
From: Alberto Planas <aplanas@gmail.com>
Date: Tue, 30 Jul 2019 11:23:12 +0200
Subject: [PATCH] Accumulated changes required for Yomi (#165)
* cmdmod: fix runas and group in run_chroot

The parameters runas and group for cmdmod.run() will change the effective
user and group before executing the command. But in a chroot environment it
is expected that the change happens inside the chroot, not outside, as the
user and groups are referring to objects that can only exist inside the
environment. This patch adds the userspec parameter to the chroot command,
to change the user in the correct place.

(cherry picked from commit f0434aaeeee3ace4e3fc65c04e69984f08b2541e)

* chroot: add missing sys directory

(cherry picked from commit cdf74426bcad4e8bf329bf604c77ea83bfca8b2c)

* chroot: change variable name to root

(cherry picked from commit 7f68b65b1b0f9eec2a6b07b02714ead0121f0e4b)

* chroot: fix bug in safe_kwargs iteration

(cherry picked from commit 39da1c69ea2781bed6e9d8e6879b70d65fa5a5b0)

* test_cmdmod: fix test_run_cwd_in_combination_with_runas

(cherry picked from commit 42640ecf161caf64c61e9b02927882f92c850092)

* test_cmdmod: add test_run_chroot_runas test

(cherry picked from commit d900035089a22f6741d2095fd1f6694597041a88)

* freezer: do not fail if cache dir is present

(cherry picked from commit 25137c51e6d6e53e3099b6cddbf51d4cb2c53d8d)

* freezer: clean freeze YAML profile on restore

(cherry picked from commit 56b97c997257f12038399549dc987b7723ab225f)

* zypperpkg: fix pkg.list_pkgs cache

The cache from pkg.list_pkgs for the zypper installer is too aggressive.
Some parameters will deliver different package lists, like root and
includes. The current cache does not take those parameters into
consideration, so the next time that this function is called, the last
list of packages will be returned, without checking if the current
parameters match the old ones.

This patch creates a different cache key for each parameter combination,
so the cached data will be separated too.

(cherry picked from commit 9c54bb3e8c93ba21fc583bdefbcadbe53cbcd7b5)
--- salt/modules/chroot.py | 36 +++++++++------- salt/modules/cmdmod.py | 12 ++++-- salt/modules/freezer.py | 20 ++++++--- salt/modules/zypperpkg.py | 13 ++++-- tests/unit/modules/test_chroot.py | 36 +++++++++++++++- tests/unit/modules/test_cmdmod.py | 50 ++++++++++++++++++++++ tests/unit/modules/test_freezer.py | 62 +++++++++++++++++++++++--- tests/unit/modules/test_zypperpkg.py | 21 ++++++++++ 8 files changed, 214 insertions(+), 36 deletions(-) diff --git a/salt/modules/chroot.py b/salt/modules/chroot.py index 6e4705b67e..17b5890d8c 100644 --- a/salt/modules/chroot.py +++ b/salt/modules/chroot.py @@ -50,16 +50,17 @@ def __virtual__(): return (False, 'Module chroot requires the command chroot') -def exist(name): +def exist(root): ''' Return True if the chroot environment is present. ''' - dev = os.path.join(name, 'dev') - proc = os.path.join(name, 'proc') - return all(os.path.isdir(i) for i in (name, dev, proc)) + dev = os.path.join(root, 'dev') + proc = os.path.join(root, 'proc') + sys = os.path.join(root, 'sys') + return all(os.path.isdir(i) for i in (root, dev, proc, sys)) -def create(name): +def create(root): ''' Create a basic chroot environment. @@ -67,7 +68,7 @@ install the minimal required binaries, including Python if chroot.call is called.
- name + root Path to the chroot environment CLI Example: @@ -77,26 +78,28 @@ def create(name): salt myminion chroot.create /chroot ''' - if not exist(name): - dev = os.path.join(name, 'dev') - proc = os.path.join(name, 'proc') + if not exist(root): + dev = os.path.join(root, 'dev') + proc = os.path.join(root, 'proc') + sys = os.path.join(root, 'sys') try: os.makedirs(dev, mode=0o755) os.makedirs(proc, mode=0o555) + os.makedirs(sys, mode=0o555) except OSError as e: log.error('Error when trying to create chroot directories: %s', e) return False return True -def call(name, function, *args, **kwargs): +def call(root, function, *args, **kwargs): ''' Executes a Salt function inside a chroot environment. The chroot does not need to have Salt installed, but Python is required. - name + root Path to the chroot environment function @@ -107,18 +110,19 @@ def call(name, function, *args, **kwargs): .. code-block:: bash salt myminion chroot.call /chroot test.ping + salt myminion chroot.call /chroot ssh.set_auth_key user key=mykey ''' if not function: raise CommandExecutionError('Missing function parameter') - if not exist(name): + if not exist(root): raise CommandExecutionError('Chroot environment not found') # Create a temporary directory inside the chroot where we can # untar salt-thin - thin_dest_path = tempfile.mkdtemp(dir=name) + thin_dest_path = tempfile.mkdtemp(dir=root) thin_path = __utils__['thin.gen_thin']( __opts__['cachedir'], extra_mods=__salt__['config.option']('thin_extra_mods', ''), @@ -130,7 +134,7 @@ def call(name, function, *args, **kwargs): return {'result': False, 'comment': stdout} chroot_path = os.path.join(os.path.sep, - os.path.relpath(thin_dest_path, name)) + os.path.relpath(thin_dest_path, root)) try: safe_kwargs = clean_kwargs(**kwargs) salt_argv = [ @@ -144,8 +148,8 @@ def call(name, function, *args, **kwargs): '-l', 'quiet', '--', function - ] + list(args) + ['{}={}'.format(k, v) for (k, v) in safe_kwargs] - ret = __salt__['cmd.run_chroot'](name, [str(x) for x in salt_argv]) + ] + list(args) + ['{}={}'.format(k, v) for (k, v) in safe_kwargs.items()] + ret = __salt__['cmd.run_chroot'](root, [str(x) for x in salt_argv]) if ret['retcode'] != EX_OK: raise CommandExecutionError(ret['stderr']) diff --git a/salt/modules/cmdmod.py b/salt/modules/cmdmod.py index d0819f2f79..b279d00a11 100644 --- a/salt/modules/cmdmod.py +++ b/salt/modules/cmdmod.py @@ -3064,13 +3064,19 @@ def run_chroot(root, if isinstance(cmd, (list, tuple)): cmd = ' '.join([six.text_type(i) for i in cmd]) - cmd = 'chroot {0} {1} -c {2}'.format(root, sh_, _cmd_quote(cmd)) + + # If runas and group are provided, we expect that the user lives + # inside the chroot, not outside. 
+ if runas: + userspec = '--userspec {}:{}'.format(runas, group if group else '') + else: + userspec = '' + + cmd = 'chroot {} {} {} -c {}'.format(userspec, root, sh_, _cmd_quote(cmd)) run_func = __context__.pop('cmd.run_chroot.func', run_all) ret = run_func(cmd, - runas=runas, - group=group, cwd=cwd, stdin=stdin, shell=shell, diff --git a/salt/modules/freezer.py b/salt/modules/freezer.py index 786dfe4515..85adbfeb82 100644 --- a/salt/modules/freezer.py +++ b/salt/modules/freezer.py @@ -151,7 +151,7 @@ def freeze(name=None, force=False, **kwargs): states_path = _states_path() try: - os.makedirs(states_path) + os.makedirs(states_path, exist_ok=True) except OSError as e: msg = 'Error when trying to create the freezer storage %s: %s' log.error(msg, states_path, e) @@ -163,13 +163,13 @@ def freeze(name=None, force=False, **kwargs): safe_kwargs = clean_kwargs(**kwargs) pkgs = __salt__['pkg.list_pkgs'](**safe_kwargs) repos = __salt__['pkg.list_repos'](**safe_kwargs) - for name, content in zip(_paths(name), (pkgs, repos)): - with fopen(name, 'w') as fp: + for fname, content in zip(_paths(name), (pkgs, repos)): + with fopen(fname, 'w') as fp: json.dump(content, fp) return True -def restore(name=None, **kwargs): +def restore(name=None, clean=False, **kwargs): ''' Make sure that the system contains the packages and repos from a frozen state. @@ -190,6 +190,9 @@ def restore(name=None, **kwargs): name Name of the frozen state. Optional. + clean + In True remove the frozen information YAML from the cache + CLI Example: .. code-block:: bash @@ -203,8 +206,8 @@ def restore(name=None, **kwargs): frozen_pkgs = {} frozen_repos = {} - for name, content in zip(_paths(name), (frozen_pkgs, frozen_repos)): - with fopen(name) as fp: + for fname, content in zip(_paths(name), (frozen_pkgs, frozen_repos)): + with fopen(fname) as fp: content.update(json.load(fp)) # The ordering of removing or adding packages and repos can be @@ -291,4 +294,9 @@ def restore(name=None, **kwargs): log.error(msg, repo, e) res['comment'].append(msg % (repo, e)) + # Clean the cached YAML files + if clean and not res['comment']: + for fname in _paths(name): + os.remove(fname) + return res diff --git a/salt/modules/zypperpkg.py b/salt/modules/zypperpkg.py index 6bc7211f59..f71d6aac9e 100644 --- a/salt/modules/zypperpkg.py +++ b/salt/modules/zypperpkg.py @@ -449,8 +449,14 @@ def _clean_cache(): ''' Clean cached results ''' + keys = [] for cache_name in ['pkg.list_pkgs', 'pkg.list_provides']: - __context__.pop(cache_name, None) + for contextkey in __context__: + if contextkey.startswith(cache_name): + keys.append(contextkey) + + for key in keys: + __context__.pop(key, None) def list_upgrades(refresh=True, root=None, **kwargs): @@ -809,9 +815,10 @@ def list_pkgs(versions_as_list=False, root=None, includes=None, **kwargs): includes = includes if includes else [] - contextkey = 'pkg.list_pkgs' + # Results can be different if a different root or a different + # inclusion types are passed + contextkey = 'pkg.list_pkgs_{}_{}'.format(root, includes) - # TODO(aplanas): this cached value depends on the parameters if contextkey not in __context__: ret = {} cmd = ['rpm'] diff --git a/tests/unit/modules/test_chroot.py b/tests/unit/modules/test_chroot.py index 7181dd7e50..0e65a26606 100644 --- a/tests/unit/modules/test_chroot.py +++ b/tests/unit/modules/test_chroot.py @@ -63,10 +63,10 @@ class ChrootTestCase(TestCase, LoaderModuleMockMixin): ''' Test if the chroot environment exist. 
''' - isdir.side_effect = (True, True, True) + isdir.side_effect = (True, True, True, True) self.assertTrue(chroot.exist('/chroot')) - isdir.side_effect = (True, True, False) + isdir.side_effect = (True, True, True, False) self.assertFalse(chroot.exist('/chroot')) @patch('os.makedirs') @@ -182,3 +182,35 @@ class ChrootTestCase(TestCase, LoaderModuleMockMixin): salt_mock['archive.tar'].assert_called_once() salt_mock['cmd.run_chroot'].assert_called_once() utils_mock['files.rm_rf'].assert_called_once() + + @patch('salt.modules.chroot.exist') + @patch('tempfile.mkdtemp') + def test_call_success_parameters(self, mkdtemp, exist): + ''' + Test execution of Salt functions in chroot with parameters. + ''' + # Success test + exist.return_value = True + mkdtemp.return_value = '/chroot/tmp01' + utils_mock = { + 'thin.gen_thin': MagicMock(return_value='/salt-thin.tgz'), + 'files.rm_rf': MagicMock(), + 'json.find_json': MagicMock(return_value={'return': 'result'}) + } + salt_mock = { + 'archive.tar': MagicMock(return_value=''), + 'config.option': MagicMock(), + 'cmd.run_chroot': MagicMock(return_value={ + 'retcode': 0, + 'stdout': '', + }), + } + with patch.dict(chroot.__utils__, utils_mock), \ + patch.dict(chroot.__salt__, salt_mock): + self.assertEqual(chroot.call('/chroot', 'ssh.set_auth_key', + user='user', key='key'), 'result') + utils_mock['thin.gen_thin'].assert_called_once() + salt_mock['config.option'].assert_called() + salt_mock['archive.tar'].assert_called_once() + salt_mock['cmd.run_chroot'].assert_called_once() + utils_mock['files.rm_rf'].assert_called_once() diff --git a/tests/unit/modules/test_cmdmod.py b/tests/unit/modules/test_cmdmod.py index a20afaca0f..6f3964f7aa 100644 --- a/tests/unit/modules/test_cmdmod.py +++ b/tests/unit/modules/test_cmdmod.py @@ -312,6 +312,22 @@ class CMDMODTestCase(TestCase, LoaderModuleMockMixin): else: raise RuntimeError + @skipIf(salt.utils.platform.is_windows(), 'Do not run on Windows') + @skipIf(salt.utils.platform.is_darwin(), 'Do not run on MacOS') + def test_run_cwd_in_combination_with_runas(self): + ''' + cmd.run executes command in the cwd directory + when the runas parameter is specified + ''' + cmd = 'pwd' + cwd = '/tmp' + runas = os.getlogin() + + with patch.dict(cmdmod.__grains__, {'os': 'Darwin', + 'os_family': 'Solaris'}): + stdout = cmdmod._run(cmd, cwd=cwd, runas=runas).get('stdout') + self.assertEqual(stdout, cwd) + def test_run_all_binary_replace(self): ''' Test for failed decoding of binary data, for instance when doing @@ -401,3 +417,37 @@ class CMDMODTestCase(TestCase, LoaderModuleMockMixin): ret = cmdmod.run_all('some command', output_encoding='latin1') self.assertEqual(ret['stdout'], stdout) + + def test_run_chroot_runas(self): + ''' + Test run_chroot when a runas parameter is provided + ''' + with patch.dict(cmdmod.__salt__, {'mount.mount': MagicMock(), + 'mount.umount': MagicMock()}): + with patch('salt.modules.cmdmod.run_all') as run_all_mock: + cmdmod.run_chroot('/mnt', 'ls', runas='foobar') + run_all_mock.assert_called_with( + 'chroot --userspec foobar: /mnt /bin/sh -c ls', + bg=False, + clean_env=False, + cwd=None, + env=None, + ignore_retcode=False, + log_callback=None, + output_encoding=None, + output_loglevel='quiet', + pillar=None, + pillarenv=None, + python_shell=True, + reset_system_locale=True, + rstrip=True, + saltenv='base', + shell='/bin/bash', + stdin=None, + success_retcodes=None, + success_stderr=None, + success_stdout=None, + template=None, + timeout=None, + umask=None, + use_vt=False) diff --git 
a/tests/unit/modules/test_freezer.py b/tests/unit/modules/test_freezer.py index f6cf2f374f..70d315c17a 100644 --- a/tests/unit/modules/test_freezer.py +++ b/tests/unit/modules/test_freezer.py @@ -112,6 +112,30 @@ class FreezerTestCase(TestCase, LoaderModuleMockMixin): self.assertRaises(CommandExecutionError, freezer.freeze) makedirs.assert_called_once() + @patch('salt.utils.json.dump') + @patch('salt.modules.freezer.fopen') + @patch('salt.modules.freezer.status') + @patch('os.makedirs') + def test_freeze_success_two_freeze(self, makedirs, status, fopen, dump): + ''' + Test to freeze a current installation + ''' + # Freeze the current new state + status.return_value = False + salt_mock = { + 'pkg.list_pkgs': MagicMock(return_value={}), + 'pkg.list_repos': MagicMock(return_value={}), + } + with patch.dict(freezer.__salt__, salt_mock): + self.assertTrue(freezer.freeze('one')) + self.assertTrue(freezer.freeze('two')) + + self.assertEqual(makedirs.call_count, 2) + self.assertEqual(salt_mock['pkg.list_pkgs'].call_count, 2) + self.assertEqual(salt_mock['pkg.list_repos'].call_count, 2) + fopen.assert_called() + dump.assert_called() + @patch('salt.utils.json.dump') @patch('salt.modules.freezer.fopen') @patch('salt.modules.freezer.status') @@ -132,7 +156,7 @@ class FreezerTestCase(TestCase, LoaderModuleMockMixin): salt_mock['pkg.list_pkgs'].assert_called_once() salt_mock['pkg.list_repos'].assert_called_once() fopen.assert_called() - dump.asster_called() + dump.assert_called() @patch('salt.utils.json.dump') @patch('salt.modules.freezer.fopen') @@ -154,7 +178,7 @@ class FreezerTestCase(TestCase, LoaderModuleMockMixin): salt_mock['pkg.list_pkgs'].assert_called_once() salt_mock['pkg.list_repos'].assert_called_once() fopen.assert_called() - dump.asster_called() + dump.assert_called() @patch('salt.modules.freezer.status') def test_restore_fails_missing_state(self, status): @@ -190,7 +214,7 @@ class FreezerTestCase(TestCase, LoaderModuleMockMixin): salt_mock['pkg.list_repos'].assert_called() salt_mock['pkg.mod_repo'].assert_called_once() fopen.assert_called() - load.asster_called() + load.assert_called() @patch('salt.utils.json.load') @patch('salt.modules.freezer.fopen') @@ -217,7 +241,7 @@ class FreezerTestCase(TestCase, LoaderModuleMockMixin): salt_mock['pkg.list_repos'].assert_called() salt_mock['pkg.install'].assert_called_once() fopen.assert_called() - load.asster_called() + load.assert_called() @patch('salt.utils.json.load') @patch('salt.modules.freezer.fopen') @@ -244,7 +268,7 @@ class FreezerTestCase(TestCase, LoaderModuleMockMixin): salt_mock['pkg.list_repos'].assert_called() salt_mock['pkg.remove'].assert_called_once() fopen.assert_called() - load.asster_called() + load.assert_called() @patch('salt.utils.json.load') @patch('salt.modules.freezer.fopen') @@ -271,4 +295,30 @@ class FreezerTestCase(TestCase, LoaderModuleMockMixin): salt_mock['pkg.list_repos'].assert_called() salt_mock['pkg.del_repo'].assert_called_once() fopen.assert_called() - load.asster_called() + load.assert_called() + + @patch('os.remove') + @patch('salt.utils.json.load') + @patch('salt.modules.freezer.fopen') + @patch('salt.modules.freezer.status') + def test_restore_clean_yml(self, status, fopen, load, remove): + ''' + Test to restore an old state + ''' + status.return_value = True + salt_mock = { + 'pkg.list_pkgs': MagicMock(return_value={}), + 'pkg.list_repos': MagicMock(return_value={}), + 'pkg.install': MagicMock(), + } + with patch.dict(freezer.__salt__, salt_mock): + self.assertEqual(freezer.restore(clean=True), { + 
'pkgs': {'add': [], 'remove': []}, + 'repos': {'add': [], 'remove': []}, + 'comment': [], + }) + salt_mock['pkg.list_pkgs'].assert_called() + salt_mock['pkg.list_repos'].assert_called() + fopen.assert_called() + load.assert_called() + self.assertEqual(remove.call_count, 2) diff --git a/tests/unit/modules/test_zypperpkg.py b/tests/unit/modules/test_zypperpkg.py index 0a3053680f..695d982ca6 100644 --- a/tests/unit/modules/test_zypperpkg.py +++ b/tests/unit/modules/test_zypperpkg.py @@ -570,6 +570,7 @@ Repository 'DUMMY' not found by its alias, number, or URI. patch.dict(zypper.__salt__, {'pkg_resource.stringify': MagicMock()}): pkgs = zypper.list_pkgs(versions_as_list=True) self.assertFalse(pkgs.get('gpg-pubkey', False)) + self.assertTrue('pkg.list_pkgs_None_[]' in zypper.__context__) for pkg_name, pkg_version in { 'jakarta-commons-discovery': ['0.4-129.686'], 'yast2-ftp-server': ['3.1.8-8.1'], @@ -612,6 +613,7 @@ Repository 'DUMMY' not found by its alias, number, or URI. patch.dict(pkg_resource.__salt__, {'pkg.parse_arch_from_name': zypper.parse_arch_from_name}): pkgs = zypper.list_pkgs(attr=['epoch', 'release', 'arch', 'install_date_time_t']) self.assertFalse(pkgs.get('gpg-pubkey', False)) + self.assertTrue('pkg.list_pkgs_None_[]' in zypper.__context__) for pkg_name, pkg_attr in { 'jakarta-commons-discovery': [{ 'version': '0.4', @@ -1455,3 +1457,22 @@ pattern() = package-c'''), 'summary': 'description b', }, } + + def test__clean_cache_empty(self): + '''Test that an empty cached can be cleaned''' + context = {} + with patch.dict(zypper.__context__, context): + zypper._clean_cache() + assert context == {} + + def test__clean_cache_filled(self): + '''Test that a filled cached can be cleaned''' + context = { + 'pkg.list_pkgs_/mnt_[]': None, + 'pkg.list_pkgs_/mnt_[patterns]': None, + 'pkg.list_provides': None, + 'pkg.other_data': None, + } + with patch.dict(zypper.__context__, context): + zypper._clean_cache() + self.assertEqual(zypper.__context__, {'pkg.other_data': None})
--
2.21.0
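The zypperpkg change in the patch above keys the pkg.list_pkgs context cache per parameter combination instead of using one shared entry. A minimal standalone sketch of that keying scheme (plain Python, not Salt code; the module-level dict and the stand-in result take the place of __context__ and the real rpm query):

    __context__ = {}

    def list_pkgs_cached(root=None, includes=None):
        includes = includes if includes else []
        # One cache entry per (root, includes) combination, matching the
        # 'pkg.list_pkgs_{}_{}' keys that the patch and its unit tests expect.
        contextkey = 'pkg.list_pkgs_{}_{}'.format(root, includes)
        if contextkey not in __context__:
            __context__[contextkey] = {'queried-with': (root, includes)}  # stand-in result
        return __context__[contextkey]

    list_pkgs_cached()                     # cached under 'pkg.list_pkgs_None_[]'
    list_pkgs_cached('/mnt', ['product'])  # cached under "pkg.list_pkgs_/mnt_['product']"

This is why _clean_cache() in the patch has to match keys by prefix rather than pop a single fixed key.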
++++++ activate-all-beacons-sources-config-pillar-grains.patch ++++++
From 5b48dee2f1b9a8203490e97620581b3a04d42632 Mon Sep 17 00:00:00 2001
From: Bo Maryniuk <bo@suse.de>
Date: Tue, 17 Oct 2017 16:52:33 +0200
Subject: [PATCH] Activate all beacons sources: config/pillar/grains
---
 salt/minion.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/salt/minion.py b/salt/minion.py
index 9468695880..0a6771dccd 100644
--- a/salt/minion.py
+++ b/salt/minion.py
@@ -439,7 +439,7 @@ class MinionBase(object):
         the pillar or grains changed
         '''
         if 'config.merge' in functions:
-            b_conf = functions['config.merge']('beacons', self.opts['beacons'], omit_opts=True)
+            b_conf = functions['config.merge']('beacons', self.opts['beacons'])
             if b_conf:
                 return self.beacons.process(b_conf, self.opts['grains'])  # pylint: disable=no-member
         return []
--
2.13.7
++++++ add-all_versions-parameter-to-include-all-installed-.patch ++++++
From c059d617a77184c3bec8159d5197355f3cab8c4e Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Pablo=20Su=C3=A1rez=20Hern=C3=A1ndez?=
 <psuarezhernandez@suse.com>
Date: Mon, 14 May 2018 11:33:13 +0100
Subject: [PATCH] Add "all_versions" parameter to include all installed
 version on rpm.info
Enable "all_versions" parameter for zypper.info_installed Enable "all_versions" parameter for yumpkg.info_installed Prevent adding failed packages when pkg name contains the arch (on SUSE) Add 'all_versions' documentation for info_installed on yum/zypper modules Add unit tests for info_installed with all_versions Refactor: use dict.setdefault instead if-else statement Allow removing only specific package versions with zypper and yum --- salt/states/pkg.py | 21 +++++++++++++++++++++ 1 file changed, 21 insertions(+) diff --git a/salt/states/pkg.py b/salt/states/pkg.py index 0aca1e0af8..2034262b23 100644 --- a/salt/states/pkg.py +++ b/salt/states/pkg.py @@ -455,6 +455,16 @@ def _find_remove_targets(name=None, if __grains__['os'] == 'FreeBSD' and origin: cver = [k for k, v in six.iteritems(cur_pkgs) if v['origin'] == pkgname] + elif __grains__['os_family'] == 'Suse': + # On SUSE systems. Zypper returns packages without "arch" in name + try: + namepart, archpart = pkgname.rsplit('.', 1) + except ValueError: + cver = cur_pkgs.get(pkgname, []) + else: + if archpart in salt.utils.pkg.rpm.ARCHES + ("noarch",): + pkgname = namepart + cver = cur_pkgs.get(pkgname, []) else: cver = cur_pkgs.get(pkgname, []) @@ -861,6 +871,17 @@ def _verify_install(desired, new_pkgs, ignore_epoch=False, new_caps=None): cver = new_pkgs.get(pkgname.split('%')[0]) elif __grains__['os_family'] == 'Debian': cver = new_pkgs.get(pkgname.split('=')[0]) + elif __grains__['os_family'] == 'Suse': + # On SUSE systems. Zypper returns packages without "arch" in name + try: + namepart, archpart = pkgname.rsplit('.', 1) + except ValueError: + cver = new_pkgs.get(pkgname) + else: + if archpart in salt.utils.pkg.rpm.ARCHES + ("noarch",): + cver = new_pkgs.get(namepart) + else: + cver = new_pkgs.get(pkgname) else: cver = new_pkgs.get(pkgname) if not cver and pkgname in new_caps: -- 2.17.1 ++++++ add-astra-linux-common-edition-to-the-os-family-list.patch ++++++
++++++ add-astra-linux-common-edition-to-the-os-family-list.patch ++++++
From c123c299e81bba3c1198a31e561220fbf808f14f Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Julio=20Gonz=C3=A1lez=20Gil?=
 <juliogonzalez@users.noreply.github.com>
Date: Wed, 12 Feb 2020 10:05:45 +0100
Subject: [PATCH] Add Astra Linux Common Edition to the OS Family list (#209)
--- salt/grains/core.py | 1 + tests/unit/grains/test_core.py | 20 ++++++++++++++++++++ 2 files changed, 21 insertions(+) diff --git a/salt/grains/core.py b/salt/grains/core.py index bf54c54553..7dc6acac76 100644 --- a/salt/grains/core.py +++ b/salt/grains/core.py @@ -1476,6 +1476,7 @@ _OS_FAMILY_MAP = { 'Funtoo': 'Gentoo', 'AIX': 'AIX', 'TurnKey': 'Debian', + 'AstraLinuxCE': 'Debian', } # Matches any possible format: diff --git a/tests/unit/grains/test_core.py b/tests/unit/grains/test_core.py index 889fb90074..9f46333009 100644 --- a/tests/unit/grains/test_core.py +++ b/tests/unit/grains/test_core.py @@ -583,6 +583,26 @@ class CoreGrainsTestCase(TestCase, LoaderModuleMockMixin): } self._run_os_grains_tests("ubuntu-17.10", _os_release_map, expectation) + @skipIf(not salt.utils.platform.is_linux(), 'System is not Linux') + def test_astralinuxce_2_os_grains(self): + ''' + Test if OS grains are parsed correctly in Astra Linux CE 2.12.22 "orel" + ''' + _os_release_map = { + 'linux_distribution': ('AstraLinuxCE', '2.12.22', 'orel'), + } + expectation = { + 'os': 'AstraLinuxCE', + 'os_family': 'Debian', + 'oscodename': 'orel', + 'osfullname': 'AstraLinuxCE', + 'osrelease': '2.12.22', + 'osrelease_info': (2, 12, 22), + 'osmajorrelease': 2, + 'osfinger': 'AstraLinuxCE-2', + } + self._run_os_grains_tests("astralinuxce-2.12.22", _os_release_map, expectation) + @skipIf(not salt.utils.platform.is_windows(), 'System is not Windows') def test_windows_platform_data(self): ''' -- 2.23.0 ++++++ add-batch_presence_ping_timeout-and-batch_presence_p.patch ++++++
From 902a3527415807448be0aa28a651374a189d102c Mon Sep 17 00:00:00 2001 From: Marcelo Chiaradia <mchiaradia@suse.com> Date: Thu, 4 Apr 2019 13:57:38 +0200 Subject: [PATCH] Add 'batch_presence_ping_timeout' and 'batch_presence_ping_gather_job_timeout' parameters for synchronous batching
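In isolation, the precedence the hunks below introduce: an explicitly passed presence-ping value wins, otherwise the generic timeout is reused (toy dictionaries standing in for the real batch option dicts):

    opts = {'timeout': 30, 'gather_job_timeout': 10}
    kwargs = {'batch_presence_ping_timeout': 5}

    opts['batch_presence_ping_timeout'] = kwargs.get(
        'batch_presence_ping_timeout', opts['timeout'])  # -> 5
    opts['batch_presence_ping_gather_job_timeout'] = kwargs.get(
        'batch_presence_ping_gather_job_timeout',
        opts['gather_job_timeout'])  # -> 10, falls back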
--- salt/cli/batch.py | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/salt/cli/batch.py b/salt/cli/batch.py index 4bd07f584a..ce239215cb 100644 --- a/salt/cli/batch.py +++ b/salt/cli/batch.py @@ -83,6 +83,9 @@ def batch_get_opts( if key not in opts: opts[key] = val + opts['batch_presence_ping_timeout'] = kwargs.get('batch_presence_ping_timeout', opts['timeout']) + opts['batch_presence_ping_gather_job_timeout'] = kwargs.get('batch_presence_ping_gather_job_timeout', opts['gather_job_timeout']) + return opts @@ -119,7 +122,7 @@ class Batch(object): args = [self.opts['tgt'], 'test.ping', [], - self.opts['timeout'], + self.opts.get('batch_presence_ping_timeout', self.opts['timeout']), ] selected_target_option = self.opts.get('selected_target_option', None) @@ -130,7 +133,7 @@ class Batch(object): self.pub_kwargs['yield_pub_data'] = True ping_gen = self.local.cmd_iter(*args, - gather_job_timeout=self.opts['gather_job_timeout'], + gather_job_timeout=self.opts.get('batch_presence_ping_gather_job_timeout', self.opts['gather_job_timeout']), **self.pub_kwargs) # Broadcast to targets -- 2.20.1 ++++++ add-cpe_name-for-osversion-grain-parsing-u-49946.patch ++++++
From c2c002a2b8f106388fda3c1abaf518f2d47ce1cf Mon Sep 17 00:00:00 2001 From: Bo Maryniuk <bo@suse.de> Date: Tue, 9 Oct 2018 14:08:50 +0200 Subject: [PATCH] Add CPE_NAME for osversion* grain parsing (U#49946)
Remove unnecessary linebreak Override VERSION_ID from os-release, if CPE_NAME is given Add unit test for WFN format of CPE_NAME Add unit test for v2.3 of CPE format Add unit test for broken CPE_NAME Prevent possible crash if CPE_NAME is wrongly written in the distro Add part parsing Keep CPE_NAME only for opensuse series Remove linebreak Expand unit test to verify part name Fix proper part name in the string-bound CPE --- salt/grains/core.py | 28 ++++++++++++++++++++++++++++ 1 file changed, 28 insertions(+) diff --git a/salt/grains/core.py b/salt/grains/core.py index 29e8371c2b..d688b6c757 100644 --- a/salt/grains/core.py +++ b/salt/grains/core.py @@ -1571,6 +1571,34 @@ def _parse_cpe_name(cpe): return ret +def _parse_cpe_name(cpe): + ''' + Parse CPE_NAME data from the os-release + + Info: https://csrc.nist.gov/projects/security-content-automation-protocol/scap-spe... + + :param cpe: + :return: + ''' + part = { + 'o': 'operating system', + 'h': 'hardware', + 'a': 'application', + } + ret = {} + cpe = (cpe or '').split(':') + if len(cpe) > 4 and cpe[0] == 'cpe': + if cpe[1].startswith('/'): # WFN to URI + ret['vendor'], ret['product'], ret['version'] = cpe[2:5] + ret['phase'] = cpe[5] if len(cpe) > 5 else None + ret['part'] = part.get(cpe[1][1:]) + elif len(cpe) == 13 and cpe[1] == '2.3': # WFN to a string + ret['vendor'], ret['product'], ret['version'], ret['phase'] = [x if x != '*' else None for x in cpe[3:7]] + ret['part'] = part.get(cpe[2]) + + return ret + + def os_data(): ''' Return grains pertaining to the operating system -- 2.17.1 ++++++ add-custom-suse-capabilities-as-grains.patch ++++++
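Traced from the _parse_cpe_name helper above: the URI-bound and the v2.3 string-bound CPE formats normalize to the same structure, and a broken CPE_NAME degrades to an empty dict:

    _parse_cpe_name('cpe:/o:opensuse:leap:15.0')
    # -> {'vendor': 'opensuse', 'product': 'leap', 'version': '15.0',
    #     'phase': None, 'part': 'operating system'}

    _parse_cpe_name('cpe:2.3:o:opensuse:leap:15.0:*:*:*:*:*:*:*')
    # -> same dict; '*' fields come back as None

    _parse_cpe_name('something else entirely')
    # -> {}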
From b02aee33a3aa1676cbfdf3a0ed936eef8a40adfe Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Pablo=20Su=C3=A1rez=20Hern=C3=A1ndez?= <psuarezhernandez@suse.com> Date: Thu, 21 Jun 2018 11:57:57 +0100 Subject: [PATCH] Add custom SUSE capabilities as Grains
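A sketch of how other Salt code can gate behaviour on the grains added below; the __grains__ lookup pattern is standard Salt, the gating itself is an assumption for illustration:

    # Inside an execution module: only use the richer code path (e.g. passing
    # all_versions=True to pkg.info_installed) when the minion advertises the
    # backported zypper capability.
    if __grains__.get('__suse_reserved_pkg_all_versions_support'):
        pass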
--- salt/grains/extra.py | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/salt/grains/extra.py b/salt/grains/extra.py index fff70e9f5b..4fb58674bf 100644 --- a/salt/grains/extra.py +++ b/salt/grains/extra.py @@ -75,3 +75,10 @@ def config(): log.warning("Bad syntax in grains file! Skipping.") return {} return {} + + +def suse_backported_capabilities(): + return { + '__suse_reserved_pkg_all_versions_support': True, + '__suse_reserved_pkg_patches_support': True + } -- 2.13.7 ++++++ add-environment-variable-to-know-if-yum-is-invoked-f.patch ++++++
From d9d459f62d53acddd67313d9d66e1fe8caf4fd45 Mon Sep 17 00:00:00 2001 From: Marcelo Chiaradia <mchiaradia@suse.com> Date: Thu, 7 Jun 2018 10:29:41 +0200 Subject: [PATCH] Add environment variable to know if yum is invoked from Salt (bsc#1057635)
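The hunks below only set the variable; a consumer (a yum plugin, say) would check it as sketched here. The consumer side is an assumption for illustration, only the variable name and value come from the patch:

    import os

    if os.environ.get('SALT_RUNNING') == '1':
        # Invoked by Salt: skip interactive prompts, avoid duplicate work.
        pass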
--- salt/modules/yumpkg.py | 18 ++++++++++++------ 1 file changed, 12 insertions(+), 6 deletions(-) diff --git a/salt/modules/yumpkg.py b/salt/modules/yumpkg.py index c250b94f0e..a56a2e8366 100644 --- a/salt/modules/yumpkg.py +++ b/salt/modules/yumpkg.py @@ -887,7 +887,8 @@ def list_repo_pkgs(*args, **kwargs): yum_version = None if _yum() != 'yum' else _LooseVersion( __salt__['cmd.run']( ['yum', '--version'], - python_shell=False + python_shell=False, + env={"SALT_RUNNING": '1'} ).splitlines()[0].strip() ) # Really old version of yum; does not even have --showduplicates option @@ -2298,7 +2299,8 @@ def list_holds(pattern=__HOLD_PATTERN, full=True): _check_versionlock() out = __salt__['cmd.run']([_yum(), 'versionlock', 'list'], - python_shell=False) + python_shell=False, + env={"SALT_RUNNING": '1'}) ret = [] for line in salt.utils.itertools.split(out, '\n'): match = _get_hold(line, pattern=pattern, full=full) @@ -2364,7 +2366,8 @@ def group_list(): out = __salt__['cmd.run_stdout']( [_yum(), 'grouplist', 'hidden'], output_loglevel='trace', - python_shell=False + python_shell=False, + env={"SALT_RUNNING": '1'} ) key = None for line in salt.utils.itertools.split(out, '\n'): @@ -2431,7 +2434,8 @@ def group_info(name, expand=False): out = __salt__['cmd.run_stdout']( cmd, output_loglevel='trace', - python_shell=False + python_shell=False, + env={"SALT_RUNNING": '1'} ) g_info = {} @@ -3100,7 +3104,8 @@ def download(*packages): __salt__['cmd.run']( cmd, output_loglevel='trace', - python_shell=False + python_shell=False, + env={"SALT_RUNNING": '1'} ) ret = {} for dld_result in os.listdir(CACHE_DIR): @@ -3175,7 +3180,8 @@ def _get_patches(installed_only=False): cmd = [_yum(), '--quiet', 'updateinfo', 'list', 'all'] ret = __salt__['cmd.run_stdout']( cmd, - python_shell=False + python_shell=False, + env={"SALT_RUNNING": '1'} ) for line in salt.utils.itertools.split(ret, os.linesep): inst, advisory_id, sev, pkg = re.match(r'([i|\s]) ([^\s]+) +([^\s]+) +([^\s]+)', -- 2.17.1 ++++++ add-hold-unhold-functions.patch ++++++
From 4219d3d69799bc20f88eed0a02ef15c932e6782e Mon Sep 17 00:00:00 2001 From: Bo Maryniuk <bo@suse.de> Date: Thu, 6 Dec 2018 16:26:23 +0100 Subject: [PATCH] Add hold/unhold functions
Add unhold function Add warnings --- salt/modules/zypperpkg.py | 88 ++++++++++++++++++++++++++++++++++++++- 1 file changed, 87 insertions(+), 1 deletion(-) diff --git a/salt/modules/zypperpkg.py b/salt/modules/zypperpkg.py index 001b852fc4..0c26e2214c 100644 --- a/salt/modules/zypperpkg.py +++ b/salt/modules/zypperpkg.py @@ -41,6 +41,7 @@ import salt.utils.pkg import salt.utils.pkg.rpm import salt.utils.stringutils import salt.utils.systemd +import salt.utils.versions from salt.utils.versions import LooseVersion import salt.utils.environment from salt.exceptions import CommandExecutionError, MinionError, SaltInvocationError @@ -1742,7 +1743,7 @@ def clean_locks(): return out -def remove_lock(packages, **kwargs): # pylint: disable=unused-argument +def unhold(name=None, pkgs=None, **kwargs): ''' Remove specified package lock. @@ -1754,7 +1755,47 @@ def remove_lock(packages, **kwargs): # pylint: disable=unused-argument salt '*' pkg.remove_lock <package1>,<package2>,<package3> salt '*' pkg.remove_lock pkgs='["foo", "bar"]' ''' + ret = {} + if (not name and not pkgs) or (name and pkgs): + raise CommandExecutionError('Name or packages must be specified.') + elif name: + pkgs = [name] + + locks = list_locks() + try: + pkgs = list(__salt__['pkg_resource.parse_targets'](pkgs)[0].keys()) + except MinionError as exc: + raise CommandExecutionError(exc) + + removed = [] + missing = [] + for pkg in pkgs: + if locks.get(pkg): + removed.append(pkg) + ret[pkg]['comment'] = 'Package {0} is no longer held.'.format(pkg) + else: + missing.append(pkg) + ret[pkg]['comment'] = 'Package {0} unable to be unheld.'.format(pkg) + + if removed: + __zypper__.call('rl', *removed) + + return ret + + +def remove_lock(packages, **kwargs): # pylint: disable=unused-argument + ''' + Remove specified package lock. + + CLI Example: + + .. code-block:: bash + salt '*' pkg.remove_lock <package name> + salt '*' pkg.remove_lock <package1>,<package2>,<package3> + salt '*' pkg.remove_lock pkgs='["foo", "bar"]' + ''' + salt.utils.versions.warn_until('Sodium', 'This function is deprecated. Please use unhold() instead.') locks = list_locks() try: packages = list(__salt__['pkg_resource.parse_targets'](packages)[0].keys()) @@ -1775,6 +1816,50 @@ def remove_lock(packages, **kwargs): # pylint: disable=unused-argument return {'removed': len(removed), 'not_found': missing} +def hold(name=None, pkgs=None, **kwargs): + ''' + Add a package lock. Specify packages to lock by exact name. + + CLI Example: + + .. code-block:: bash + + salt '*' pkg.add_lock <package name> + salt '*' pkg.add_lock <package1>,<package2>,<package3> + salt '*' pkg.add_lock pkgs='["foo", "bar"]' + + :param name: + :param pkgs: + :param kwargs: + :return: + ''' + ret = {} + if (not name and not pkgs) or (name and pkgs): + raise CommandExecutionError('Name or packages must be specified.') + elif name: + pkgs = [name] + + locks = list_locks() + added = [] + try: + pkgs = list(__salt__['pkg_resource.parse_targets'](pkgs)[0].keys()) + except MinionError as exc: + raise CommandExecutionError(exc) + + for pkg in pkgs: + ret[pkg] = {'name': pkg, 'changes': {}, 'result': False, 'comment': ''} + if not locks.get(pkg): + added.append(pkg) + ret[pkg]['comment'] = 'Package {0} is now being held.'.format(pkg) + else: + ret[pkg]['comment'] = 'Package {0} is already set to be held.'.format(pkg) + + if added: + __zypper__.call('al', *added) + + return ret + + def add_lock(packages, **kwargs): # pylint: disable=unused-argument ''' Add a package lock. Specify packages to lock by exact name. 
@@ -1787,6 +1872,7 @@ def add_lock(packages, **kwargs): # pylint: disable=unused-argument salt '*' pkg.add_lock <package1>,<package2>,<package3> salt '*' pkg.add_lock pkgs='["foo", "bar"]' ''' + salt.utils.versions.warn_until('Sodium', 'This function is deprecated. Please use hold() instead.') locks = list_locks() added = [] try: -- 2.20.1 ++++++ add-missing-fun-for-returns-from-wfunc-executions.patch ++++++
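A defensive sketch of the per-package result bookkeeping shared by the hold()/unhold() pair above. Note that unhold() as patched assigns ret[pkg]['comment'] without first seeding ret[pkg] the way hold() does, so this sketch uses setdefault to stay KeyError-free:

    ret = {}
    for pkg in ('vim', 'emacs'):
        entry = ret.setdefault(
            pkg, {'name': pkg, 'changes': {}, 'result': False, 'comment': ''})
        entry['comment'] = 'Package {0} is now being held.'.format(pkg)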
From fa957bcb842a29a340a980a03cd8e54b06e7e21b Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Pablo=20Su=C3=A1rez=20Hern=C3=A1ndez?= <psuarezhernandez@suse.com> Date: Wed, 9 Oct 2019 13:03:33 +0100 Subject: [PATCH] Add missing 'fun' for returns from wfunc executions
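In isolation, the normalization the two hunks below perform on a wfunc return before it is fired on the event bus (values illustrative):

    data = {'return': {'result': True}}  # raw wfunc return
    id_, fun, jid = 'minion1', 'state.apply', '20191009130333123456'

    if not isinstance(data, dict):
        data = {'return': data}
    if 'id' not in data:
        data['id'] = id_
    if 'fun' not in data:  # the check this patch adds
        data['fun'] = fun
    data['jid'] = jid
    # -> {'return': {...}, 'id': 'minion1', 'fun': 'state.apply', 'jid': '...'}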
--- salt/client/ssh/__init__.py | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/salt/client/ssh/__init__.py b/salt/client/ssh/__init__.py index 1453430e73..0df918d634 100644 --- a/salt/client/ssh/__init__.py +++ b/salt/client/ssh/__init__.py @@ -682,6 +682,8 @@ class SSH(object): data = {'return': data} if 'id' not in data: data['id'] = id_ + if 'fun' not in data: + data['fun'] = fun data['jid'] = jid # make the jid in the payload the same as the jid in the tag self.event.fire_event( data, @@ -797,6 +799,8 @@ class SSH(object): data = {'return': data} if 'id' not in data: data['id'] = id_ + if 'fun' not in data: + data['fun'] = fun data['jid'] = jid # make the jid in the payload the same as the jid in the tag self.event.fire_event( data, -- 2.22.0 ++++++ add-multi-file-support-and-globbing-to-the-filetree-.patch ++++++
From 671bb9d48e120c806ca1f6f176b0ada43b1e7594 Mon Sep 17 00:00:00 2001 From: Bo Maryniuk <bo@suse.de> Date: Fri, 12 Oct 2018 16:20:40 +0200 Subject: [PATCH] Add multi-file support and globbing to the filetree (U#50018)
Add more possible logs Support multiple files grabbing Collect system logs and boot logs Support globbing in filetree --- salt/cli/support/intfunc.py | 49 ++++++++++++++++----------- salt/cli/support/profiles/default.yml | 7 ++++ 2 files changed, 37 insertions(+), 19 deletions(-) diff --git a/salt/cli/support/intfunc.py b/salt/cli/support/intfunc.py index 2727cd6394..f15f4d4097 100644 --- a/salt/cli/support/intfunc.py +++ b/salt/cli/support/intfunc.py @@ -6,6 +6,7 @@ Internal functions. from __future__ import absolute_import, print_function, unicode_literals import os +import glob from salt.cli.support.console import MessagesOutput import salt.utils.files @@ -13,7 +14,7 @@ import salt.utils.files out = MessagesOutput() -def filetree(collector, path): +def filetree(collector, *paths): ''' Add all files in the tree. If the "path" is a file, only that file will be added. @@ -21,22 +22,32 @@ def filetree(collector, path): :param path: File or directory :return: ''' - if not path: - out.error('Path not defined', ident=2) - else: - # The filehandler needs to be explicitly passed here, so PyLint needs to accept that. - # pylint: disable=W8470 - if os.path.isfile(path): - filename = os.path.basename(path) - try: - file_ref = salt.utils.files.fopen(path) # pylint: disable=W - out.put('Add {}'.format(filename), indent=2) - collector.add(filename) - collector.link(title=path, path=file_ref) - except Exception as err: - out.error(err, ident=4) - # pylint: enable=W8470 + _paths = [] + # Unglob + for path in paths: + _paths += glob.glob(path) + for path in set(_paths): + if not path: + out.error('Path not defined', ident=2) + elif not os.path.exists(path): + out.warning('Path {} does not exists'.format(path)) else: - for fname in os.listdir(path): - fname = os.path.join(path, fname) - filetree(collector, fname) + # The filehandler needs to be explicitly passed here, so PyLint needs to accept that. + # pylint: disable=W8470 + if os.path.isfile(path): + filename = os.path.basename(path) + try: + file_ref = salt.utils.files.fopen(path) # pylint: disable=W + out.put('Add {}'.format(filename), indent=2) + collector.add(filename) + collector.link(title=path, path=file_ref) + except Exception as err: + out.error(err, ident=4) + # pylint: enable=W8470 + else: + try: + for fname in os.listdir(path): + fname = os.path.join(path, fname) + filetree(collector, [fname]) + except Exception as err: + out.error(err, ident=4) diff --git a/salt/cli/support/profiles/default.yml b/salt/cli/support/profiles/default.yml index 01d9a26193..3defb5eef3 100644 --- a/salt/cli/support/profiles/default.yml +++ b/salt/cli/support/profiles/default.yml @@ -62,10 +62,17 @@ general-health: - ps.top: info: Top CPU consuming processes +boot_log: + - filetree: + info: Collect boot logs + args: + - /var/log/boot.* + system.log: # This works on any file system object. - filetree: info: Add system log args: - /var/log/syslog + - /var/log/messages -- 2.19.0 ++++++ add-ppc64le-as-a-valid-rpm-package-architecture.patch ++++++
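The unglob step added to filetree() above, in isolation: each argument may be a literal path or a glob pattern, and the set collapses duplicates:

    import glob

    def expand(*paths):
        _paths = []
        for path in paths:
            _paths += glob.glob(path)
        return set(_paths)

    # expand('/var/log/boot.*', '/var/log/syslog') might return
    # {'/var/log/boot.log', '/var/log/boot.msg', '/var/log/syslog'},
    # depending on what exists on the host.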
From aa9df9a08aa2a761cd91d91376a6a7dfa820c48f Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Pablo=20Su=C3=A1rez=20Hern=C3=A1ndez?= <psuarezhernandez@suse.com> Date: Fri, 24 May 2019 16:27:07 +0100 Subject: [PATCH] Add 'ppc64le' as a valid RPM package architecture
--- salt/utils/pkg/rpm.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/salt/utils/pkg/rpm.py b/salt/utils/pkg/rpm.py index 828b0cecda..cb85eb99fe 100644 --- a/salt/utils/pkg/rpm.py +++ b/salt/utils/pkg/rpm.py @@ -21,7 +21,7 @@ log = logging.getLogger(__name__) # These arches compiled from the rpmUtils.arch python module source ARCHES_64 = ('x86_64', 'athlon', 'amd64', 'ia32e', 'ia64', 'geode') ARCHES_32 = ('i386', 'i486', 'i586', 'i686') -ARCHES_PPC = ('ppc', 'ppc64', 'ppc64iseries', 'ppc64pseries') +ARCHES_PPC = ('ppc', 'ppc64', 'ppc64le', 'ppc64iseries', 'ppc64pseries') ARCHES_S390 = ('s390', 's390x') ARCHES_SPARC = ( 'sparc', 'sparcv8', 'sparcv9', 'sparcv9v', 'sparc64', 'sparc64v' -- 2.17.1 ++++++ add-saltssh-multi-version-support-across-python-inte.patch ++++++
From 18c46c301b98841d941e2d07901e7468de30b83a Mon Sep 17 00:00:00 2001 From: Bo Maryniuk <bo@suse.de> Date: Mon, 12 Mar 2018 12:01:39 +0100 Subject: [PATCH] Add SaltSSH multi-version support across Python interpreters.
Bugfix: crashes when OPTIONS.saltdir is a file salt-ssh: allow server and client to run different python major version Handle non-directory on the /tmp Bugfix: prevent partial fileset removal in /tmp salt-ssh: compare checksums to detect newly generated thin on the server Reset time at thin unpack Bugfix: get a proper option for CLI and opts of wiping the tmp Add docstring to get_tops Remove unnecessary noise in imports Refactor get_tops collector Add logging to the get_tops Update call script Remove pre-caution Update log debug message for tops collector Reset default compression, if unknown is passed Refactor archive creation flow Add external shell-callable function to collect tops Simplify tops gathering, bugfix alternative to Py2 find working executable Add basic shareable module classifier Add proper error handler, unmuting exceptions during top collection Use common shared directory for compatible libraries fix searching for python versions Flatten error message string Bail-out immediately if <2.6 version detected Simplify shell cmd to get the version on Python 2.x Remove stub that was previously moved upfront Lintfix: PEP8 ident Add logging on the error, when Python-2 version cannot be detected properly Generate salt-call source, based on conditions Add logging on remove failure on thin.tgz archive Add config-based external tops gatherer Change signature to pass the extended configuration to the thin generator Update docstring to the salt-call generator Implement get namespaces inclusion to the salt-call script on the client machine Use new signature of the get call Implement namespace selector, based on the current Python interpreter version Add deps as a list, instead of a map Add debug logging Implement packaging an alternative version Update salt-call script so it swaps the namespace according to the configuration Compress thin.zip if zlib is available Fix a system exit error message Move compression fall-back operation Add debug logging prior to the thin archive removal Flatten the archive extension choice Lintfix: PEP8 an empty line required Bugfix: ZFS modules (zfs, zpool) crashes on non-ZFS systems Add unit test case for the Salt SSH parts Add unit test for missing dependencies on get_ext_tops Postpone inheritance implementation Refactor unit test for get_ext_tops Add unit test for get_ext_tops checks interpreter configuration Check python interpreter lock version Add unit test for get_ext_tops checks the python locked interepreter value Bugfix: report into warning log module name, not its config Add unit test for dependencies check python version lock (inherently) Mock os.path.isfile function Update warning logging information Add unit test for get_ext_tops module configuration validation Do not use list of dicts for namespaces, just dict for namespaces. 
Add unit test for get_ext_tops config verification Fix unit tests for the new config structure Add unit test for thin.gte call Add unit test for dependency path adding function Add unit test for thin_path function Add unit test for salt-call source generator Add unit test for get_ext_namespaces on empty configuration Add get_ext_namespaces for namespace extractions into a tuple for python version Remove unused variable Add unit test for getting namespace failure when python maj/min versions are not defined Add unit test to add tops based on the current interpreter Add unit test for get_tops with extra modules Add unit test for shared object modules top addition Add unit test for thin_sum hashing Add unit test for min_sum hashing Add unit test for gen_thin verify for 2.6 Python version is a minimum requirement Fix gen_thin exception on Python 3 Use object attribute instead of indeces. Remove an empty line. Add unit test for gen_thin compression type fallback Move helper functions up by the class code Update unit test doc Add check for correct archiving mode is opened Add unit test for gen_thin if control files are written correctly Update docstring for fake version info constructor method Add fake tarfile mock handler Mock-out missing methods inside gen_thin Move tarfile.open check to the end of the test Add unit test for tree addition to the archive Add shareable module to the gen_thin unit test Fix docstring Add unit test for an alternative version pack Lintfix Add documentation about updated Salt SSH features Fix typo Lintfix: PEP8 extra-line needed Make the command more readable Write all supported minimal python versions into a config file on the target machine Get supported Python executable based on the config py-map Add unit test for get_supported_py_config function typecheck Add unit test for get_supported_py_config function base tops Add unit test for get_supported_py_config function ext tops Fix unit test for catching "supported-versions" was written down Rephrase Salt SSH doc description Re-phrase docstring for alternative Salt installation require same major version while minor is allowed to be higher Bugfix: remove minor version from the namespaced, version-specific directory Fix unit tests for minor version removal of namespaced version-specific directory Initialise the options directly to be structure-ready object. 
Disable wiping if state is executed Properly mock a tempfile object Support Python 2.6 versions Add digest collector for file trees etc Bufix: recurse calls damages the configuration (reference problem) Collect digest of the code Get code checksum into the shim options Get all the code content, not just Python sources Bugfix: Python3 compat - string required instead of bytes Lintfix: too many empty lines Lintfix: blocked function used Bugfix: key error master_tops_first Fix unit tests for the checksum generator Use code checksum to update thin archive on client's cache Lintfix Set master_top_first to False by default --- doc/topics/releases/fluorine.rst | 178 +++++++++++++++++++++++++++++++ salt/client/ssh/ssh_py_shim.py | 4 + salt/utils/thin.py | 1 + 3 files changed, 183 insertions(+) create mode 100644 doc/topics/releases/fluorine.rst diff --git a/doc/topics/releases/fluorine.rst b/doc/topics/releases/fluorine.rst new file mode 100644 index 0000000000..40c69e25cc --- /dev/null +++ b/doc/topics/releases/fluorine.rst @@ -0,0 +1,178 @@ +:orphan: + +====================================== +Salt Release Notes - Codename Fluorine +====================================== + + +Minion Startup Events +--------------------- + +When a minion starts up it sends a notification on the event bus with a tag +that looks like this: `salt/minion/<minion_id>/start`. For historical reasons +the minion also sends a similar event with an event tag like this: +`minion_start`. This duplication can cause a lot of clutter on the event bus +when there are many minions. Set `enable_legacy_startup_events: False` in the +minion config to ensure only the `salt/minion/<minion_id>/start` events are +sent. + +The new :conf_minion:`enable_legacy_startup_events` minion config option +defaults to ``True``, but will be set to default to ``False`` beginning with +the Neon release of Salt. + +The Salt Syndic currently sends an old style `syndic_start` event as well. The +syndic respects :conf_minion:`enable_legacy_startup_events` as well. + + +Deprecations +------------ + +Module Deprecations +=================== + +The ``napalm_network`` module had the following changes: + +- Support for the ``template_path`` has been removed in the ``load_template`` + function. This is because support for NAPALM native templates has been + dropped. + +The ``trafficserver`` module had the following changes: + +- Support for the ``match_var`` function was removed. Please use the + ``match_metric`` function instead. +- Support for the ``read_var`` function was removed. Please use the + ``read_config`` function instead. +- Support for the ``set_var`` function was removed. Please use the + ``set_config`` function instead. + +The ``win_update`` module has been removed. It has been replaced by ``win_wua`` +module. + +The ``win_wua`` module had the following changes: + +- Support for the ``download_update`` function has been removed. Please use the + ``download`` function instead. +- Support for the ``download_updates`` function has been removed. Please use the + ``download`` function instead. +- Support for the ``install_update`` function has been removed. Please use the + ``install`` function instead. +- Support for the ``install_updates`` function has been removed. Please use the + ``install`` function instead. +- Support for the ``list_update`` function has been removed. Please use the + ``get`` function instead. +- Support for the ``list_updates`` function has been removed. Please use the + ``list`` function instead. 
+ +Pillar Deprecations +=================== + +The ``vault`` pillar had the following changes: + +- Support for the ``profile`` argument was removed. Any options passed up until + and following the first ``path=`` are discarded. + +Roster Deprecations +=================== + +The ``cache`` roster had the following changes: + +- Support for ``roster_order`` as a list or tuple has been removed. As of the + ``Fluorine`` release, ``roster_order`` must be a dictionary. +- The ``roster_order`` option now includes IPv6 in addition to IPv4 for the + ``private``, ``public``, ``global`` or ``local`` settings. The syntax for these + settings has changed to ``ipv4-*`` or ``ipv6-*``, respectively. + +State Deprecations +================== + +The ``docker`` state has been removed. The following functions should be used +instead. + +- The ``docker.running`` function was removed. Please update applicable SLS files + to use the ``docker_container.running`` function instead. +- The ``docker.stopped`` function was removed. Please update applicable SLS files + to use the ``docker_container.stopped`` function instead. +- The ``docker.absent`` function was removed. Please update applicable SLS files + to use the ``docker_container.absent`` function instead. +- The ``docker.absent`` function was removed. Please update applicable SLS files + to use the ``docker_container.absent`` function instead. +- The ``docker.network_present`` function was removed. Please update applicable + SLS files to use the ``docker_network.present`` function instead. +- The ``docker.network_absent`` function was removed. Please update applicable + SLS files to use the ``docker_network.absent`` function instead. +- The ``docker.image_present`` function was removed. Please update applicable SLS + files to use the ``docker_image.present`` function instead. +- The ``docker.image_absent`` function was removed. Please update applicable SLS + files to use the ``docker_image.absent`` function instead. +- The ``docker.volume_present`` function was removed. Please update applicable SLS + files to use the ``docker_volume.present`` function instead. +- The ``docker.volume_absent`` function was removed. Please update applicable SLS + files to use the ``docker_volume.absent`` function instead. + +The ``docker_network`` state had the following changes: + +- Support for the ``driver`` option has been removed from the ``absent`` function. + This option had no functionality in ``docker_network.absent``. + +The ``git`` state had the following changes: + +- Support for the ``ref`` option in the ``detached`` state has been removed. + Please use the ``rev`` option instead. + +The ``k8s`` state has been removed. The following functions should be used +instead: + +- The ``k8s.label_absent`` function was removed. Please update applicable SLS + files to use the ``kubernetes.node_label_absent`` function instead. +- The ``k8s.label_present`` function was removed. Please updated applicable SLS + files to use the ``kubernetes.node_label_present`` function instead. +- The ``k8s.label_folder_absent`` function was removed. Please update applicable + SLS files to use the ``kubernetes.node_label_folder_absent`` function instead. + +The ``netconfig`` state had the following changes: + +- Support for the ``template_path`` option in the ``managed`` state has been + removed. This is because support for NAPALM native templates has been dropped. + +The ``trafficserver`` state had the following changes: + +- Support for the ``set_var`` function was removed. 
Please use the ``config`` + function instead. + +The ``win_update`` state has been removed. Please use the ``win_wua`` state instead. + +SaltSSH major updates +===================== + +SaltSSH now works across different major Python versions. Python 2.7 ~ Python 3.x +are now supported transparently. Requirement is, however, that the SaltMaster should +have installed Salt, including all related dependencies for Python 2 and Python 3. +Everything needs to be importable from the respective Python environment. + +SaltSSH can bundle up an arbitrary version of Salt. If there would be an old box for +example, running an outdated and unsupported Python 2.6, it is still possible from +a SaltMaster with Python 3.5 or newer to access it. This feature requires an additional +configuration in /etc/salt/master as follows: + + +.. code-block:: yaml + + ssh_ext_alternatives: + 2016.3: # Namespace, can be actually anything. + py-version: [2, 6] # Constraint to specific interpreter version + path: /opt/2016.3/salt # Main Salt installation + dependencies: # List of dependencies and their installation paths + jinja2: /opt/jinja2 + yaml: /opt/yaml + tornado: /opt/tornado + msgpack: /opt/msgpack + certifi: /opt/certifi + singledispatch: /opt/singledispatch.py + singledispatch_helpers: /opt/singledispatch_helpers.py + markupsafe: /opt/markupsafe + backports_abc: /opt/backports_abc.py + +It is also possible to use several alternative versions of Salt. You can for instance generate +a minimal tarball using runners and include that. But this is only possible, when such specific +Salt version is also available on the Master machine, although does not need to be directly +installed together with the older Python interpreter. diff --git a/salt/client/ssh/ssh_py_shim.py b/salt/client/ssh/ssh_py_shim.py index be17a1a38c..595d1c40c7 100644 --- a/salt/client/ssh/ssh_py_shim.py +++ b/salt/client/ssh/ssh_py_shim.py @@ -164,6 +164,9 @@ def unpack_thin(thin_path): old_umask = os.umask(0o077) # pylint: disable=blacklisted-function tfile.extractall(path=OPTIONS.saltdir) tfile.close() + checksum_path = os.path.normpath(os.path.join(OPTIONS.saltdir, "thin_checksum")) + with open(checksum_path, 'w') as chk: + chk.write(OPTIONS.checksum + '\n') os.umask(old_umask) # pylint: disable=blacklisted-function try: os.unlink(thin_path) @@ -357,5 +360,6 @@ def main(argv): # pylint: disable=W0613 return retcode + if __name__ == '__main__': sys.exit(main(sys.argv)) diff --git a/salt/utils/thin.py b/salt/utils/thin.py index b60815225e..172b0938f5 100644 --- a/salt/utils/thin.py +++ b/salt/utils/thin.py @@ -9,6 +9,7 @@ from __future__ import absolute_import, print_function, unicode_literals import copy import logging import os +import copy import shutil import subprocess import sys -- 2.17.1 ++++++ add-standalone-configuration-file-for-enabling-packa.patch ++++++
From 74160010c0fdddb04980ad664e155550382ef82b Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Pablo=20Su=C3=A1rez=20Hern=C3=A1ndez?= <psuarezhernandez@suse.com> Date: Wed, 22 May 2019 13:00:46 +0100 Subject: [PATCH] Add standalone configuration file for enabling package formulas
--- conf/suse/standalone-formulas-configuration.conf | 4 ++++ 1 file changed, 4 insertions(+) create mode 100644 conf/suse/standalone-formulas-configuration.conf diff --git a/conf/suse/standalone-formulas-configuration.conf b/conf/suse/standalone-formulas-configuration.conf new file mode 100644 index 0000000000..94d05fb2ee --- /dev/null +++ b/conf/suse/standalone-formulas-configuration.conf @@ -0,0 +1,4 @@ +file_roots: + base: + - /usr/share/salt-formulas/states + - /srv/salt -- 2.17.1 ++++++ add-supportconfig-module-for-remote-calls-and-saltss.patch ++++++ ++++ 1405 lines (skipped) ++++++ add-virt.all_capabilities.patch ++++++
From 0fd1e40e7149dd1a33f9a4497fa4e31c78ddfba7 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?C=C3=A9dric=20Bosdonnat?= <cbosdonnat@suse.com> Date: Thu, 18 Oct 2018 13:32:59 +0200 Subject: [PATCH] Add virt.all_capabilities
In order to get all possible capabilities from a host, the user has to call virt.capabilities, and then loop over the guests and domains before calling virt.domain_capabilities for each of them. This commit embeds all this logic to get them all in a single virt.all_capabilities call. --- salt/modules/virt.py | 107 +++++++++++++++++++++++--------- tests/unit/modules/test_virt.py | 56 +++++++++++++++++ 2 files changed, 134 insertions(+), 29 deletions(-) diff --git a/salt/modules/virt.py b/salt/modules/virt.py index b45c5f522d..0921122a8a 100644 --- a/salt/modules/virt.py +++ b/salt/modules/virt.py @@ -4094,37 +4094,10 @@ def _parse_caps_loader(node): return result -def domain_capabilities(emulator=None, arch=None, machine=None, domain=None, **kwargs): +def _parse_domain_caps(caps): ''' - Return the domain capabilities given an emulator, architecture, machine or virtualization type. - - .. versionadded:: 2019.2.0 - - :param emulator: return the capabilities for the given emulator binary - :param arch: return the capabilities for the given CPU architecture - :param machine: return the capabilities for the given emulated machine type - :param domain: return the capabilities for the given virtualization type. - :param connection: libvirt connection URI, overriding defaults - :param username: username to connect with, overriding defaults - :param password: password to connect with, overriding defaults - - The list of the possible emulator, arch, machine and domain can be found in - the host capabilities output. - - If none of the parameters is provided the libvirt default domain capabilities - will be returned. - - CLI Example: - - .. code-block:: bash - - salt '*' virt.domain_capabilities arch='x86_64' domain='kvm' - + Parse the XML document of domain capabilities into a structure. ''' - conn = __get_conn(**kwargs) - caps = ElementTree.fromstring(conn.getDomainCapabilities(emulator, arch, machine, domain, 0)) - conn.close() - result = { 'emulator': caps.find('path').text if caps.find('path') is not None else None, 'domain': caps.find('domain').text if caps.find('domain') is not None else None, @@ -4164,6 +4137,82 @@ def domain_capabilities(emulator=None, arch=None, machine=None, domain=None, **k return result +def domain_capabilities(emulator=None, arch=None, machine=None, domain=None, **kwargs): + ''' + Return the domain capabilities given an emulator, architecture, machine or virtualization type. + + .. versionadded:: Fluorine + + :param emulator: return the capabilities for the given emulator binary + :param arch: return the capabilities for the given CPU architecture + :param machine: return the capabilities for the given emulated machine type + :param domain: return the capabilities for the given virtualization type. + :param connection: libvirt connection URI, overriding defaults + :param username: username to connect with, overriding defaults + :param password: password to connect with, overriding defaults + + The list of the possible emulator, arch, machine and domain can be found in + the host capabilities output. + + If none of the parameters is provided, the libvirt default one is returned. + + CLI Example: + + .. 
code-block:: bash + + salt '*' virt.domain_capabilities arch='x86_64' domain='kvm' + + ''' + conn = __get_conn(**kwargs) + result = [] + try: + caps = ElementTree.fromstring(conn.getDomainCapabilities(emulator, arch, machine, domain, 0)) + result = _parse_domain_caps(caps) + finally: + conn.close() + + return result + + +def all_capabilities(**kwargs): + ''' + Return the host and domain capabilities in a single call. + + .. versionadded:: Neon + + :param connection: libvirt connection URI, overriding defaults + :param username: username to connect with, overriding defaults + :param password: password to connect with, overriding defaults + + CLI Example: + + .. code-block:: bash + + salt '*' virt.all_capabilities + + ''' + conn = __get_conn(**kwargs) + result = {} + try: + host_caps = ElementTree.fromstring(conn.getCapabilities()) + domains = [[(guest.get('arch', {}).get('name', None), key) + for key in guest.get('arch', {}).get('domains', {}).keys()] + for guest in [_parse_caps_guest(guest) for guest in host_caps.findall('guest')]] + flattened = [pair for item in (x for x in domains) for pair in item] + result = { + 'host': { + 'host': _parse_caps_host(host_caps.find('host')), + 'guests': [_parse_caps_guest(guest) for guest in host_caps.findall('guest')] + }, + 'domains': [_parse_domain_caps(ElementTree.fromstring( + conn.getDomainCapabilities(None, arch, None, domain))) + for (arch, domain) in flattened]} + finally: + conn.close() + + return result + + def cpu_baseline(full=False, migratable=False, out='libvirt', **kwargs): ''' Return the optimal 'custom' CPU baseline config for VM's on this minion diff --git a/tests/unit/modules/test_virt.py b/tests/unit/modules/test_virt.py index 3a69adece1..bd34962a6a 100644 --- a/tests/unit/modules/test_virt.py +++ b/tests/unit/modules/test_virt.py @@ -2204,6 +2204,62 @@ class VirtTestCase(TestCase, LoaderModuleMockMixin): self.assertEqual(expected, caps) + def test_all_capabilities(self): + ''' + Test the virt.domain_capabilities default output + ''' + domainXml = ''' +<domainCapabilities> + <path>/usr/bin/qemu-system-x86_64</path> + <domain>kvm</domain> + <machine>virt-2.12</machine> + <arch>x86_64</arch> + <vcpu max='255'/> + <iothreads supported='yes'/> +</domainCapabilities> + ''' + hostXml = ''' +<capabilities> + <host> + <uuid>44454c4c-3400-105a-8033-b3c04f4b344a</uuid> + <cpu> + <arch>x86_64</arch> + <model>Nehalem</model> + <vendor>Intel</vendor> + <microcode version='25'/> + <topology sockets='1' cores='4' threads='2'/> + </cpu> + </host> + <guest> + <os_type>hvm</os_type> + <arch name='x86_64'> + <wordsize>64</wordsize> + <emulator>/usr/bin/qemu-system-x86_64</emulator> + <machine maxCpus='255'>pc-i440fx-2.6</machine> + <machine canonical='pc-i440fx-2.6' maxCpus='255'>pc</machine> + <machine maxCpus='255'>pc-0.12</machine> + <domain type='qemu'/> + <domain type='kvm'> + <emulator>/usr/bin/qemu-kvm</emulator> + <machine maxCpus='255'>pc-i440fx-2.6</machine> + <machine canonical='pc-i440fx-2.6' maxCpus='255'>pc</machine> + <machine maxCpus='255'>pc-0.12</machine> + </domain> + </arch> + </guest> +</capabilities> + ''' + + # pylint: disable=no-member + self.mock_conn.getCapabilities.return_value = hostXml + self.mock_conn.getDomainCapabilities.side_effect = [ + domainXml, domainXml.replace('<domain>kvm', '<domain>qemu')] + # pylint: enable=no-member + + caps = virt.all_capabilities() + self.assertEqual('44454c4c-3400-105a-8033-b3c04f4b344a', caps['host']['host']['uuid']) + self.assertEqual(set(['qemu', 'kvm']), set([domainCaps['domain'] for 
domainCaps in caps['domains']])) + def test_network_tag(self): ''' Test virt._get_net_xml() with VLAN tag -- 2.20.1 ++++++ add-virt.network_get_xml-function.patch ++++++
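The guest/domain flattening inside all_capabilities() above, traced with toy data; the inner (x for x in domains) generator is equivalent to iterating domains directly:

    domains = [[('x86_64', 'qemu'), ('x86_64', 'kvm')], [('i686', 'qemu')]]
    flattened = [pair for item in (x for x in domains) for pair in item]
    # -> [('x86_64', 'qemu'), ('x86_64', 'kvm'), ('i686', 'qemu')]
    # Each (arch, domain) pair then feeds one getDomainCapabilities() call.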
From 6701bd208e9acbfee4e55b6b36bd7c80f211b74b Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?C=C3=A9dric=20Bosdonnat?= <cbosdonnat@suse.com> Date: Mon, 30 Dec 2019 17:28:50 +0100 Subject: [PATCH] Add virt.network_get_xml function
Users may want to see the full XML definition of a network. Add virt.pool_get_xml function Users may want to see the full XML definition of a virtual storage pool. --- salt/modules/virt.py | 48 +++++++++++++++++++++++++++++++++++++++++ tests/unit/modules/test_virt.py | 20 +++++++++++++++++ 2 files changed, 68 insertions(+) diff --git a/salt/modules/virt.py b/salt/modules/virt.py index 44c7e78ef0..339760ead4 100644 --- a/salt/modules/virt.py +++ b/salt/modules/virt.py @@ -4633,6 +4633,30 @@ def network_info(name=None, **kwargs): return result +def network_get_xml(name, **kwargs): + ''' + Return the XML definition of a virtual network + + :param name: libvirt network name + :param connection: libvirt connection URI, overriding defaults + :param username: username to connect with, overriding defaults + :param password: password to connect with, overriding defaults + + .. versionadded:: Neon + + CLI Example: + + .. code-block:: bash + + salt '*' virt.network_get_xml default + ''' + conn = __get_conn(**kwargs) + try: + return conn.networkLookupByName(name).XMLDesc() + finally: + conn.close() + + def network_start(name, **kwargs): ''' Start a defined virtual network. @@ -5377,6 +5401,30 @@ def pool_info(name=None, **kwargs): return result +def pool_get_xml(name, **kwargs): + ''' + Return the XML definition of a virtual storage pool + + :param name: libvirt storage pool name + :param connection: libvirt connection URI, overriding defaults + :param username: username to connect with, overriding defaults + :param password: password to connect with, overriding defaults + + .. versionadded:: Neon + + CLI Example: + + .. code-block:: bash + + salt '*' virt.pool_get_xml default + ''' + conn = __get_conn(**kwargs) + try: + return conn.storagePoolLookupByName(name).XMLDesc() + finally: + conn.close() + + def pool_start(name, **kwargs): ''' Start a defined libvirt storage pool. diff --git a/tests/unit/modules/test_virt.py b/tests/unit/modules/test_virt.py index 698e1922fc..719f97a724 100644 --- a/tests/unit/modules/test_virt.py +++ b/tests/unit/modules/test_virt.py @@ -2404,6 +2404,16 @@ class VirtTestCase(TestCase, LoaderModuleMockMixin): net = virt.network_info('foo') self.assertEqual({}, net) + def test_network_get_xml(self): + ''' + Test virt.network_get_xml + ''' + network_mock = MagicMock() + network_mock.XMLDesc.return_value = '<net>Raw XML</net>' + self.mock_conn.networkLookupByName.return_value = network_mock + + self.assertEqual('<net>Raw XML</net>', virt.network_get_xml('default')) + def test_pool(self): ''' Test virt._gen_pool_xml() @@ -2806,6 +2816,16 @@ class VirtTestCase(TestCase, LoaderModuleMockMixin): } }, pool) + def test_pool_get_xml(self): + ''' + Test virt.pool_get_xml + ''' + pool_mock = MagicMock() + pool_mock.XMLDesc.return_value = '<pool>Raw XML</pool>' + self.mock_conn.storagePoolLookupByName.return_value = pool_mock + + self.assertEqual('<pool>Raw XML</pool>', virt.pool_get_xml('default')) + def test_pool_list_volumes(self): ''' Test virt.pool_list_volumes -- 2.16.4 ++++++ add-virt.volume_infos-and-virt.volume_delete.patch ++++++
From 5e202207d02d2bf4860cc5487ed19f9d835993d1 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?C=C3=A9dric=20Bosdonnat?= <cbosdonnat@suse.com> Date: Fri, 15 Feb 2019 17:28:00 +0100 Subject: [PATCH] Add virt.volume_infos() and virt.volume_delete()
Expose more functions to handle libvirt storage volumes. virt.volume_infos() expose informations of the volumes, either for one or all the volumes. Among the provided data, this function exposes the names of the virtual machines using the volumes of file type. virt.volume_delete() allows removing a given volume. --- salt/modules/virt.py | 126 +++++++++++++++++++++ tests/unit/modules/test_virt.py | 195 ++++++++++++++++++++++++++++++++ 2 files changed, 321 insertions(+) diff --git a/salt/modules/virt.py b/salt/modules/virt.py index 0921122a8a..17039444c4 100644 --- a/salt/modules/virt.py +++ b/salt/modules/virt.py @@ -4988,3 +4988,129 @@ def pool_list_volumes(name, **kwargs): return pool.listVolumes() finally: conn.close() + + +def _get_storage_vol(conn, pool, vol): + ''' + Helper function getting a storage volume. Will throw a libvirtError + if the pool or the volume couldn't be found. + ''' + pool_obj = conn.storagePoolLookupByName(pool) + return pool_obj.storageVolLookupByName(vol) + + +def _is_valid_volume(vol): + ''' + Checks whether a volume is valid for further use since those may have disappeared since + the last pool refresh. + ''' + try: + # Getting info on an invalid volume raises error + vol.info() + return True + except libvirt.libvirtError as err: + return False + + +def _get_all_volumes_paths(conn): + ''' + Extract the path and backing stores path of all volumes. + + :param conn: libvirt connection to use + ''' + volumes = [vol for l in [obj.listAllVolumes() for obj in conn.listAllStoragePools()] for vol in l] + return {vol.path(): [path.text for path in ElementTree.fromstring(vol.XMLDesc()).findall('.//backingStore/path')] + for vol in volumes if _is_valid_volume(vol)} + + +def volume_infos(pool=None, volume=None, **kwargs): + ''' + Provide details on a storage volume. If no volume name is provided, the infos + all the volumes contained in the pool are provided. If no pool is provided, + the infos of the volumes of all pools are output. + + :param pool: libvirt storage pool name (default: ``None``) + :param volume: name of the volume to get infos from (default: ``None``) + :param connection: libvirt connection URI, overriding defaults + :param username: username to connect with, overriding defaults + :param password: password to connect with, overriding defaults + + .. versionadded:: Neon + + CLI Example: + + .. code-block:: bash + + salt "*" virt.volume_infos <pool> <volume> + ''' + result = {} + conn = __get_conn(**kwargs) + try: + backing_stores = _get_all_volumes_paths(conn) + disks = {domain.name(): + {node.get('file') for node + in ElementTree.fromstring(domain.XMLDesc(0)).findall('.//disk/source/[@file]')} + for domain in _get_domain(conn)} + + def _volume_extract_infos(vol): + ''' + Format the volume info dictionary + + :param vol: the libvirt storage volume object. + ''' + types = ['file', 'block', 'dir', 'network', 'netdir', 'ploop'] + infos = vol.info() + + # If we have a path, check its use. 
+ used_by = [] + if vol.path(): + as_backing_store = {path for (path, all_paths) in backing_stores.items() if vol.path() in all_paths} + used_by = [vm_name for (vm_name, vm_disks) in disks.items() + if vm_disks & as_backing_store or vol.path() in vm_disks] + + return { + 'type': types[infos[0]] if infos[0] < len(types) else 'unknown', + 'key': vol.key(), + 'path': vol.path(), + 'capacity': infos[1], + 'allocation': infos[2], + 'used_by': used_by, + } + + pools = [obj for obj in conn.listAllStoragePools() if pool is None or obj.name() == pool] + vols = {pool_obj.name(): {vol.name(): _volume_extract_infos(vol) + for vol in pool_obj.listAllVolumes() + if (volume is None or vol.name() == volume) and _is_valid_volume(vol)} + for pool_obj in pools} + return {pool_name: volumes for (pool_name, volumes) in vols.items() if volumes} + except libvirt.libvirtError as err: + log.debug('Silenced libvirt error: %s', str(err)) + finally: + conn.close() + return result + + +def volume_delete(pool, volume, **kwargs): + ''' + Delete a libvirt managed volume. + + :param pool: libvirt storage pool name + :param volume: name of the volume to delete + :param connection: libvirt connection URI, overriding defaults + :param username: username to connect with, overriding defaults + :param password: password to connect with, overriding defaults + + .. versionadded:: Neon + + CLI Example: + + .. code-block:: bash + + salt "*" virt.volume_delete <pool> <volume> + ''' + conn = __get_conn(**kwargs) + try: + vol = _get_storage_vol(conn, pool, volume) + return not bool(vol.delete()) + finally: + conn.close() diff --git a/tests/unit/modules/test_virt.py b/tests/unit/modules/test_virt.py index bd34962a6a..14e51e1e2a 100644 --- a/tests/unit/modules/test_virt.py +++ b/tests/unit/modules/test_virt.py @@ -2698,3 +2698,198 @@ class VirtTestCase(TestCase, LoaderModuleMockMixin): self.mock_conn.storagePoolLookupByName.return_value = mock_pool # pylint: enable=no-member self.assertEqual(names, virt.pool_list_volumes('default')) + + def test_volume_infos(self): + ''' + Test virt.volume_infos + ''' + vms_disks = [ + ''' + <disk type='file' device='disk'> + <driver name='qemu' type='qcow2'/> + <source file='/path/to/vol0.qcow2'/> + <target dev='vda' bus='virtio'/> + </disk> + ''', + ''' + <disk type='file' device='disk'> + <driver name='qemu' type='qcow2'/> + <source file='/path/to/vol3.qcow2'/> + <target dev='vda' bus='virtio'/> + </disk> + ''', + ''' + <disk type='file' device='disk'> + <driver name='qemu' type='qcow2'/> + <source file='/path/to/vol2.qcow2'/> + <target dev='vda' bus='virtio'/> + </disk> + ''' + ] + mock_vms = [] + for idx, disk in enumerate(vms_disks): + vm = MagicMock() + # pylint: disable=no-member + vm.name.return_value = 'vm{0}'.format(idx) + vm.XMLDesc.return_value = ''' + <domain type='kvm' id='1'> + <name>vm{0}</name> + <devices>{1}</devices> + </domain> + '''.format(idx, disk) + # pylint: enable=no-member + mock_vms.append(vm) + + mock_pool_data = [ + { + 'name': 'pool0', + 'volumes': [ + { + 'key': '/key/of/vol0', + 'name': 'vol0', + 'path': '/path/to/vol0.qcow2', + 'info': [0, 123456789, 123456], + 'backingStore': None + } + ] + }, + { + 'name': 'pool1', + 'volumes': [ + { + 'key': '/key/of/vol0bad', + 'name': 'vol0bad', + 'path': '/path/to/vol0bad.qcow2', + 'info': None, + 'backingStore': None + }, + { + 'key': '/key/of/vol1', + 'name': 'vol1', + 'path': '/path/to/vol1.qcow2', + 'info': [0, 12345, 1234], + 'backingStore': None + }, + { + 'key': '/key/of/vol2', + 'name': 'vol2', + 'path': 
'/path/to/vol2.qcow2', + 'info': [0, 12345, 1234], + 'backingStore': '/path/to/vol0.qcow2' + }, + ], + } + ] + mock_pools = [] + for pool_data in mock_pool_data: + mock_pool = MagicMock() + mock_pool.name.return_value = pool_data['name'] # pylint: disable=no-member + mock_volumes = [] + for vol_data in pool_data['volumes']: + mock_volume = MagicMock() + # pylint: disable=no-member + mock_volume.name.return_value = vol_data['name'] + mock_volume.key.return_value = vol_data['key'] + mock_volume.path.return_value = '/path/to/{0}.qcow2'.format(vol_data['name']) + if vol_data['info']: + mock_volume.info.return_value = vol_data['info'] + backing_store = ''' + <backingStore> + <format>qcow2</format> + <path>{0}</path> + </backingStore> + '''.format(vol_data['backingStore']) if vol_data['backingStore'] else '<backingStore/>' + mock_volume.XMLDesc.return_value = ''' + <volume type='file'> + <name>{0}</name> + <target> + <format>qcow2</format> + <path>/path/to/{0}.qcow2</path> + </target> + {1} + </volume> + '''.format(vol_data['name'], backing_store) + else: + mock_volume.info.side_effect = self.mock_libvirt.libvirtError('No such volume') + mock_volume.XMLDesc.side_effect = self.mock_libvirt.libvirtError('No such volume') + mock_volumes.append(mock_volume) + # pylint: enable=no-member + mock_pool.listAllVolumes.return_value = mock_volumes # pylint: disable=no-member + mock_pools.append(mock_pool) + + self.mock_conn.listAllStoragePools.return_value = mock_pools # pylint: disable=no-member + + with patch('salt.modules.virt._get_domain', MagicMock(return_value=mock_vms)): + actual = virt.volume_infos('pool0', 'vol0') + self.assertEqual(1, len(actual.keys())) + self.assertEqual(1, len(actual['pool0'].keys())) + self.assertEqual(['vm0', 'vm2'], sorted(actual['pool0']['vol0']['used_by'])) + self.assertEqual('/path/to/vol0.qcow2', actual['pool0']['vol0']['path']) + self.assertEqual('file', actual['pool0']['vol0']['type']) + self.assertEqual('/key/of/vol0', actual['pool0']['vol0']['key']) + self.assertEqual(123456789, actual['pool0']['vol0']['capacity']) + self.assertEqual(123456, actual['pool0']['vol0']['allocation']) + + self.assertEqual(virt.volume_infos('pool1', None), { + 'pool1': { + 'vol1': { + 'type': 'file', + 'key': '/key/of/vol1', + 'path': '/path/to/vol1.qcow2', + 'capacity': 12345, + 'allocation': 1234, + 'used_by': [], + }, + 'vol2': { + 'type': 'file', + 'key': '/key/of/vol2', + 'path': '/path/to/vol2.qcow2', + 'capacity': 12345, + 'allocation': 1234, + 'used_by': ['vm2'], + } + } + }) + + self.assertEqual(virt.volume_infos(None, 'vol2'), { + 'pool1': { + 'vol2': { + 'type': 'file', + 'key': '/key/of/vol2', + 'path': '/path/to/vol2.qcow2', + 'capacity': 12345, + 'allocation': 1234, + 'used_by': ['vm2'], + } + } + }) + + def test_volume_delete(self): + ''' + Test virt.volume_delete + ''' + mock_delete = MagicMock(side_effect=[0, 1]) + mock_volume = MagicMock() + mock_volume.delete = mock_delete # pylint: disable=no-member + mock_pool = MagicMock() + # pylint: disable=no-member + mock_pool.storageVolLookupByName.side_effect = [ + mock_volume, + mock_volume, + self.mock_libvirt.libvirtError("Missing volume"), + mock_volume, + ] + self.mock_conn.storagePoolLookupByName.side_effect = [ + mock_pool, + mock_pool, + mock_pool, + self.mock_libvirt.libvirtError("Missing pool"), + ] + + # pylint: enable=no-member + self.assertTrue(virt.volume_delete('default', 'test_volume')) + self.assertFalse(virt.volume_delete('default', 'test_volume')) + with self.assertRaises(self.mock_libvirt.libvirtError): + 
virt.volume_delete('default', 'missing') + virt.volume_delete('missing', 'test_volume') + self.assertEqual(mock_delete.call_count, 2) -- 2.20.1 ++++++ adds-enabled-kwarg.patch ++++++
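The used_by computation in volume_infos() above, traced with the same toy topology the new unit test uses: vm0 attaches vol0 directly, vm2 attaches vol2 whose backing file is vol0:

    backing_stores = {'/path/to/vol2.qcow2': ['/path/to/vol0.qcow2']}
    disks = {'vm0': {'/path/to/vol0.qcow2'},
             'vm2': {'/path/to/vol2.qcow2'}}
    vol_path = '/path/to/vol0.qcow2'

    as_backing_store = {path for (path, all_paths) in backing_stores.items()
                        if vol_path in all_paths}
    used_by = [vm for (vm, vm_disks) in disks.items()
               if vm_disks & as_backing_store or vol_path in vm_disks]
    # -> ['vm0', 'vm2']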
From d0ebd2b6b6bb93ae151b78a19154870624b16c71 Mon Sep 17 00:00:00 2001 From: Jochen Breuer <jbreuer@suse.de> Date: Tue, 5 Nov 2019 13:59:12 +0100 Subject: [PATCH] Adds enabled kwarg
aptpkg is using the keyword argument 'disabled', while zypper and yum are using enabled. This change allows to also pass 'disabled' to mod_repo from the aptpkg module. --- salt/modules/aptpkg.py | 2 ++ tests/unit/modules/test_aptpkg.py | 24 ++++++++++++++++++++++++ 2 files changed, 26 insertions(+) diff --git a/salt/modules/aptpkg.py b/salt/modules/aptpkg.py index 023049b2af..009cfb2c5b 100644 --- a/salt/modules/aptpkg.py +++ b/salt/modules/aptpkg.py @@ -2353,6 +2353,8 @@ def mod_repo(repo, saltenv='base', **kwargs): if 'disabled' in kwargs: kwargs['disabled'] = salt.utils.data.is_true(kwargs['disabled']) + elif 'enabled' in kwargs: + kwargs['disabled'] = not salt.utils.data.is_true(kwargs['enabled']) kw_type = kwargs.get('type') kw_dist = kwargs.get('dist') diff --git a/tests/unit/modules/test_aptpkg.py b/tests/unit/modules/test_aptpkg.py index 5c7e38eae7..d5e3d765b1 100644 --- a/tests/unit/modules/test_aptpkg.py +++ b/tests/unit/modules/test_aptpkg.py @@ -548,6 +548,30 @@ class AptPkgTestCase(TestCase, LoaderModuleMockMixin): self.assert_called_once(refresh_mock) refresh_mock.reset_mock() + def test_mod_repo_enabled(self): + ''' + Checks if a repo is enabled or disabled depending on the passed kwargs. + ''' + with patch.dict(aptpkg.__salt__, {'config.option': MagicMock(), 'no_proxy': MagicMock(return_value=False)}): + with patch('salt.modules.aptpkg._check_apt', MagicMock(return_value=True)): + with patch('salt.modules.aptpkg.refresh_db', MagicMock(return_value={})): + with patch('salt.utils.data.is_true', MagicMock(return_value=True)) as data_is_true: + with patch('salt.modules.aptpkg.sourceslist', MagicMock(), create=True): + repo = aptpkg.mod_repo('foo', enabled=False) + data_is_true.assert_called_with(False) + # with disabled=True; should call salt.utils.data.is_true True + data_is_true.reset_mock() + repo = aptpkg.mod_repo('foo', disabled=True) + data_is_true.assert_called_with(True) + # with enabled=True; should call salt.utils.data.is_true with False + data_is_true.reset_mock() + repo = aptpkg.mod_repo('foo', enabled=True) + data_is_true.assert_called_with(True) + # with disabled=True; should call salt.utils.data.is_true False + data_is_true.reset_mock() + repo = aptpkg.mod_repo('foo', disabled=False) + data_is_true.assert_called_with(False) + @patch('salt.utils.path.os_walk', MagicMock(return_value=[('test', 'test', 'test')])) @patch('os.path.getsize', MagicMock(return_value=123456)) @patch('os.path.getctime', MagicMock(return_value=1234567890.123456)) -- 2.16.4 ++++++ adds-the-possibility-to-also-use-downloadonly-in-kwa.patch ++++++
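The enabled/disabled mapping added above, in isolation; is_true below is a simplified stand-in for salt.utils.data.is_true:

    def is_true(value):
        # Simplified stand-in for salt.utils.data.is_true.
        return value in (True, 'true', 'True', '1', 1)

    kwargs = {'enabled': False}
    if 'disabled' in kwargs:
        kwargs['disabled'] = is_true(kwargs['disabled'])
    elif 'enabled' in kwargs:
        kwargs['disabled'] = not is_true(kwargs['enabled'])
    assert kwargs['disabled'] is True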
From f9e7ace2f7c56a7fb4df60a048131dbd6887340b Mon Sep 17 00:00:00 2001 From: Jochen Breuer <jbreuer@suse.de> Date: Fri, 27 Sep 2019 11:33:47 +0200 Subject: [PATCH] Adds the possibility to also use downloadonly in kwargs
The download_only parameter in the apt module is not in line with the yum and zypper modules. Both of them use downloadonly without the underline. With this change apt now additionally supports the downloadonly parameter. Fixes #54790 --- salt/modules/aptpkg.py | 7 ++++--- tests/unit/modules/test_aptpkg.py | 30 ++++++++++++++++++++++++++++++ 2 files changed, 34 insertions(+), 3 deletions(-) diff --git a/salt/modules/aptpkg.py b/salt/modules/aptpkg.py index a11bb51c16..1a60255a1d 100644 --- a/salt/modules/aptpkg.py +++ b/salt/modules/aptpkg.py @@ -1054,8 +1054,9 @@ def upgrade(refresh=True, dist_upgrade=False, **kwargs): Skip refreshing the package database if refresh has already occurred within <value> seconds - download_only - Only download the packages, don't unpack or install them + download_only (or downloadonly) + Only download the packages, don't unpack or install them. Use + downloadonly to be in line with yum and zypper module. .. versionadded:: 2018.3.0 @@ -1086,7 +1087,7 @@ def upgrade(refresh=True, dist_upgrade=False, **kwargs): cmd.append('--force-yes') if kwargs.get('skip_verify', False): cmd.append('--allow-unauthenticated') - if kwargs.get('download_only', False): + if kwargs.get('download_only', False) or kwargs.get('downloadonly', False): cmd.append('--download-only') cmd.append('dist-upgrade' if dist_upgrade else 'upgrade') diff --git a/tests/unit/modules/test_aptpkg.py b/tests/unit/modules/test_aptpkg.py index 85360da181..d3fac5902a 100644 --- a/tests/unit/modules/test_aptpkg.py +++ b/tests/unit/modules/test_aptpkg.py @@ -393,6 +393,36 @@ class AptPkgTestCase(TestCase, LoaderModuleMockMixin): with patch.multiple(aptpkg, **patch_kwargs): self.assertEqual(aptpkg.upgrade(), dict()) + def test_upgrade_downloadonly(self): + ''' + Tests the download-only options for upgrade. + ''' + with patch('salt.utils.pkg.clear_rtag', MagicMock()): + with patch('salt.modules.aptpkg.list_pkgs', + MagicMock(return_value=UNINSTALL)): + mock_cmd = MagicMock(return_value={ + 'retcode': 0, + 'stdout': UPGRADE + }) + patch_kwargs = { + '__salt__': { + 'config.get': MagicMock(return_value=True), + 'cmd.run_all': mock_cmd + }, + } + with patch.multiple(aptpkg, **patch_kwargs): + aptpkg.upgrade() + args_matching = [True for args in patch_kwargs['__salt__']['cmd.run_all'].call_args.args if "--download-only" in args] + self.assertFalse(any(args_matching)) + + aptpkg.upgrade(downloadonly=True) + args_matching = [True for args in patch_kwargs['__salt__']['cmd.run_all'].call_args.args if "--download-only" in args] + self.assertTrue(any(args_matching)) + + aptpkg.upgrade(download_only=True) + args_matching = [True for args in patch_kwargs['__salt__']['cmd.run_all'].call_args.args if "--download-only" in args] + self.assertTrue(any(args_matching)) + def test_show(self): ''' Test that the pkg.show function properly parses apt-cache show output. -- 2.16.4 ++++++ align-virt-full-info-fixes-with-upstream-192.patch ++++++
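Both spellings now reach the same apt flag; the command assembly above, traced in isolation:

    cmd = []
    kwargs = {'downloadonly': True}  # the yum/zypper spelling
    if kwargs.get('download_only', False) or kwargs.get('downloadonly', False):
        cmd.append('--download-only')
    cmd.append('upgrade')
    # -> ['--download-only', 'upgrade'], identical to download_only=True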
From 87c5cd60e1b25732ad622e4d7e44760b4c818042 Mon Sep 17 00:00:00 2001 From: Cedric Bosdonnat <cbosdonnat@suse.com> Date: Mon, 9 Dec 2019 17:27:41 +0100 Subject: [PATCH] Align virt full info fixes with upstream (#192)
* Porting PR #52574 to 2019.2.1 * Partly revert 4ce0bc544174fdb00482db4653fb4b0ef411e78b to match upstream's fix --- salt/modules/virt.py | 12 +++++++----- tests/unit/modules/test_virt.py | 23 ++++++++++++++++++++++- 2 files changed, 29 insertions(+), 6 deletions(-) diff --git a/salt/modules/virt.py b/salt/modules/virt.py index 3abc140a00..5e26964449 100644 --- a/salt/modules/virt.py +++ b/salt/modules/virt.py @@ -331,7 +331,7 @@ def _get_uuid(dom): salt '*' virt.get_uuid <domain> ''' - return ElementTree.fromstring(dom.XMLDesc(0)).find('uuid').text + return ElementTree.fromstring(get_xml(dom)).find('uuid').text def _get_on_poweroff(dom): @@ -344,7 +344,7 @@ def _get_on_poweroff(dom): salt '*' virt.get_on_restart <domain> ''' - node = ElementTree.fromstring(dom.XMLDesc(0)).find('on_poweroff') + node = ElementTree.fromstring(get_xml(dom)).find('on_poweroff') return node.text if node is not None else '' @@ -358,7 +358,7 @@ def _get_on_reboot(dom): salt '*' virt.get_on_reboot <domain> ''' - node = ElementTree.fromstring(dom.XMLDesc(0)).find('on_reboot') + node = ElementTree.fromstring(get_xml(dom)).find('on_reboot') return node.text if node is not None else '' @@ -372,7 +372,7 @@ def _get_on_crash(dom): salt '*' virt.get_on_crash <domain> ''' - node = ElementTree.fromstring(dom.XMLDesc(0)).find('on_crash') + node = ElementTree.fromstring(get_xml(dom)).find('on_crash') return node.text if node is not None else '' @@ -2435,7 +2435,9 @@ def get_xml(vm_, **kwargs): salt '*' virt.get_xml <domain> ''' conn = __get_conn(**kwargs) - xml_desc = _get_domain(conn, vm_).XMLDesc(0) + xml_desc = vm_.XMLDesc(0) if isinstance( + vm_, libvirt.virDomain + ) else _get_domain(conn, vm_).XMLDesc(0) conn.close() return xml_desc diff --git a/tests/unit/modules/test_virt.py b/tests/unit/modules/test_virt.py index b95f51807f..d8efafc063 100644 --- a/tests/unit/modules/test_virt.py +++ b/tests/unit/modules/test_virt.py @@ -38,6 +38,10 @@ class LibvirtMock(MagicMock): # pylint: disable=too-many-ancestors ''' Libvirt library mock ''' + class virDomain(MagicMock): + ''' + virDomain mock + ''' class libvirtError(Exception): ''' @@ -76,7 +80,7 @@ class VirtTestCase(TestCase, LoaderModuleMockMixin): Define VM to use in tests ''' self.mock_conn.listDefinedDomains.return_value = [name] # pylint: disable=no-member - mock_domain = MagicMock() + mock_domain = self.mock_libvirt.virDomain() self.mock_conn.lookupByName.return_value = mock_domain # pylint: disable=no-member mock_domain.XMLDesc.return_value = xml # pylint: disable=no-member @@ -1396,6 +1400,23 @@ class VirtTestCase(TestCase, LoaderModuleMockMixin): re.match('^([0-9A-F]{2}[:-]){5}([0-9A-F]{2})$', interface_attrs['mac'], re.I)) + def test_get_xml(self): + ''' + Test virt.get_xml() + ''' + xml = '''<domain type='kvm' id='7'> + <name>test-vm</name> + <devices> + <graphics type='vnc' port='5900' autoport='yes' listen='0.0.0.0'> + <listen type='address' address='0.0.0.0'/> + </graphics> + </devices> + </domain> + ''' + domain = self.set_mock_vm("test-vm", xml) + self.assertEqual(xml, virt.get_xml('test-vm')) + self.assertEqual(xml, virt.get_xml(domain)) + def test_parse_qemu_img_info(self): ''' Make sure that qemu-img info output is properly parsed -- 2.23.0 ++++++ allow-passing-kwargs-to-pkg.list_downloaded-bsc-1140.patch ++++++
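The crux of the virt alignment patch above is that get_xml() now accepts either a domain name or an already-resolved libvirt.virDomain handle, so internal helpers such as _get_uuid() can reuse the object they already hold instead of doing a second lookup. The dispatch in isolation, with a stub standing in for libvirt (FakeDomain and the lookup callable are illustrative only):

    class FakeDomain:
        # Stand-in for libvirt.virDomain in this sketch.
        def XMLDesc(self, flags=0):
            return '<domain/>'

    def get_xml(vm_, lookup=lambda name: FakeDomain()):
        # Accept either a name (str) or an existing domain handle.
        dom = vm_ if isinstance(vm_, FakeDomain) else lookup(vm_)
        return dom.XMLDesc(0)

    assert get_xml('test-vm') == '<domain/>'
    assert get_xml(FakeDomain()) == '<domain/>'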
From 9e2139213bc2eeb8afbf10fdff663ebe7ed23887 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Pablo=20Su=C3=A1rez=20Hern=C3=A1ndez?= <psuarezhernandez@suse.com> Date: Wed, 3 Jul 2019 09:34:50 +0100 Subject: [PATCH] Allow passing kwargs to pkg.list_downloaded (bsc#1140193)
Add unit test for pkg.list_downloaded with kwargs --- salt/modules/zypperpkg.py | 2 +- tests/unit/modules/test_zypperpkg.py | 27 +++++++++++++++++++++++++++ 2 files changed, 28 insertions(+), 1 deletion(-) diff --git a/salt/modules/zypperpkg.py b/salt/modules/zypperpkg.py index 9d0407e674..6bc7211f59 100644 --- a/salt/modules/zypperpkg.py +++ b/salt/modules/zypperpkg.py @@ -2553,7 +2553,7 @@ def download(*packages, **kwargs): ) -def list_downloaded(root=None): +def list_downloaded(root=None, **kwargs): ''' .. versionadded:: 2017.7.0 diff --git a/tests/unit/modules/test_zypperpkg.py b/tests/unit/modules/test_zypperpkg.py index d2ae06a98e..0a3053680f 100644 --- a/tests/unit/modules/test_zypperpkg.py +++ b/tests/unit/modules/test_zypperpkg.py @@ -766,6 +766,33 @@ Repository 'DUMMY' not found by its alias, number, or URI. self.assertEqual(len(list_patches), 3) self.assertDictEqual(list_patches, PATCHES_RET) + @patch('salt.utils.path.os_walk', MagicMock(return_value=[('test', 'test', 'test')])) + @patch('os.path.getsize', MagicMock(return_value=123456)) + @patch('os.path.getctime', MagicMock(return_value=1234567890.123456)) + @patch('fnmatch.filter', MagicMock(return_value=['/var/cache/zypper/packages/foo/bar/test_package.rpm'])) + def test_list_downloaded_with_kwargs(self): + ''' + Test downloaded packages listing. + + :return: + ''' + DOWNLOADED_RET = { + 'test-package': { + '1.0': { + 'path': '/var/cache/zypper/packages/foo/bar/test_package.rpm', + 'size': 123456, + 'creation_date_time_t': 1234567890, + 'creation_date_time': '2009-02-13T23:31:30', + } + } + } + + with patch.dict(zypper.__salt__, {'lowpkg.bin_pkg_info': MagicMock(return_value={'name': 'test-package', + 'version': '1.0'})}): + list_downloaded = zypper.list_downloaded(kw1=True, kw2=False) + self.assertEqual(len(list_downloaded), 1) + self.assertDictEqual(list_downloaded, DOWNLOADED_RET) + @patch('salt.utils.path.os_walk', MagicMock(return_value=[('test', 'test', 'test')])) @patch('os.path.getsize', MagicMock(return_value=123456)) @patch('os.path.getctime', MagicMock(return_value=1234567890.123456)) -- 2.21.0 ++++++ async-batch-implementation.patch ++++++ ++++ 960 lines (skipped) ++++++ avoid-excessive-syslogging-by-watchdog-cronjob-58.patch ++++++
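Regarding the pkg.list_downloaded change in the allow-passing-kwargs patch above: higher layers call execution modules generically and forward extra keyword arguments, so a signature without **kwargs raises TypeError the moment an unexpected option arrives. A minimal illustration:

    def list_downloaded(root=None, **kwargs):
        # Unknown keyword arguments are deliberately accepted and ignored
        # so generic callers can pass module-agnostic options.
        return {}

    # Without **kwargs in the signature, this call would raise TypeError.
    list_downloaded(kw1=True, kw2=False)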
From 310f8eb22db6010ba48ab371a7223c1345cfbcf0 Mon Sep 17 00:00:00 2001 From: Hubert Mantel <mantel@suse.de> Date: Mon, 27 Nov 2017 13:55:13 +0100 Subject: [PATCH] avoid excessive syslogging by watchdog cronjob (#58)
--- pkg/suse/salt-minion | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pkg/suse/salt-minion b/pkg/suse/salt-minion index 2e418094ed..73a91ebd62 100755 --- a/pkg/suse/salt-minion +++ b/pkg/suse/salt-minion @@ -55,7 +55,7 @@ WATCHDOG_CRON="/etc/cron.d/salt-minion" set_watchdog() { if [ ! -f $WATCHDOG_CRON ]; then - echo -e '* * * * * root /usr/bin/salt-daemon-watcher --with-init\n' > $WATCHDOG_CRON + echo -e '-* * * * * root /usr/bin/salt-daemon-watcher --with-init\n' > $WATCHDOG_CRON # Kick the watcher for 1 minute immediately, because cron will wake up only afterwards /usr/bin/salt-daemon-watcher --with-init & disown fi -- 2.13.7 ++++++ avoid-traceback-when-http.query-request-cannot-be-pe.patch ++++++
From 36433f3f81fb45ff40ed2d294494342c9f622c2e Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Pablo=20Su=C3=A1rez=20Hern=C3=A1ndez?= <psuarezhernandez@suse.com> Date: Mon, 29 Jul 2019 11:17:53 +0100 Subject: [PATCH] Avoid traceback when http.query request cannot be performed (bsc#1128554)
Improve error logging when http.query cannot be performed --- salt/utils/http.py | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/salt/utils/http.py b/salt/utils/http.py index 78043b1d6c..1b7dff6202 100644 --- a/salt/utils/http.py +++ b/salt/utils/http.py @@ -567,11 +567,13 @@ def query(url, except tornado.httpclient.HTTPError as exc: ret['status'] = exc.code ret['error'] = six.text_type(exc) + log.error("Cannot perform 'http.query': {0} - {1}".format(url_full, ret['error'])) return ret - except socket.gaierror as exc: + except (socket.herror, socket.error, socket.timeout, socket.gaierror) as exc: if status is True: ret['status'] = 0 ret['error'] = six.text_type(exc) + log.error("Cannot perform 'http.query': {0} - {1}".format(url_full, ret['error'])) return ret if stream is True or handle is True: -- 2.21.0 ++++++ azurefs-gracefully-handle-attributeerror.patch ++++++
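The http.query patch above does two things: it widens the handled exceptions from socket.gaierror alone to the whole socket error family, and it logs the failing URL before returning the error dict rather than letting a traceback escape. The shape of that contract, sketched with a hypothetical opener callable:

    import logging
    import socket

    log = logging.getLogger(__name__)

    def query(url, opener):
        ret = {}
        try:
            ret['body'] = opener(url)
        except (socket.herror, socket.error, socket.timeout, socket.gaierror) as exc:
            # Record which URL failed and return an error dict instead of raising.
            ret['error'] = str(exc)
            log.error("Cannot perform 'http.query': %s - %s", url, ret['error'])
        return ret

    def failing_opener(url):
        raise socket.timeout('timed out')

    assert 'error' in query('http://example.com', failing_opener)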
From d914a1e952e393f3e72aee2cb8d9056533f490cc Mon Sep 17 00:00:00 2001 From: Robert Munteanu <rombert@apache.org> Date: Mon, 19 Nov 2018 17:52:34 +0100 Subject: [PATCH] azurefs: gracefully handle AttributeError
It is possible that the azure.storage object has no __version__ defined. In that case, prevent console spam with unhandled AttributeError messages and instead consider that Azure support is not present. Problem was encountered on openSUSE Tumbleweed. --- salt/fileserver/azurefs.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/salt/fileserver/azurefs.py b/salt/fileserver/azurefs.py index 547a681016..032739d160 100644 --- a/salt/fileserver/azurefs.py +++ b/salt/fileserver/azurefs.py @@ -68,7 +68,7 @@ try: if LooseVersion(azure.storage.__version__) < LooseVersion('0.20.0'): raise ImportError('azure.storage.__version__ must be >= 0.20.0') HAS_AZURE = True -except ImportError: +except (ImportError, AttributeError): HAS_AZURE = False # Import third party libs -- 2.17.1 ++++++ batch-async-catch-exceptions-and-safety-unregister-a.patch ++++++
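The azurefs patch above hardens a common optional-dependency probe: the probe can fail not only with ImportError but with whatever the version check itself raises, here AttributeError from a missing __version__. The guard in general form (a sketch; the real fileserver is gated on HAS_AZURE):

    from distutils.version import LooseVersion

    try:
        import azure.storage  # optional third-party dependency
        # Some builds ship the package without __version__, so this
        # attribute access can raise AttributeError.
        if LooseVersion(azure.storage.__version__) < LooseVersion('0.20.0'):
            raise ImportError('azure.storage.__version__ must be >= 0.20.0')
        HAS_AZURE = True
    except (ImportError, AttributeError):
        HAS_AZURE = False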
From 8a23030d347b7487328c0395f5e30ef29daf1455 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Pablo=20Su=C3=A1rez=20Hern=C3=A1ndez?= <psuarezhernandez@suse.com> Date: Fri, 28 Feb 2020 15:11:53 +0000 Subject: [PATCH] Batch Async: Catch exceptions and safely unregister and close instances
--- salt/cli/batch_async.py | 156 +++++++++++++++++++++++----------------- 1 file changed, 89 insertions(+), 67 deletions(-) diff --git a/salt/cli/batch_async.py b/salt/cli/batch_async.py index da069b64bd..b8f272ed67 100644 --- a/salt/cli/batch_async.py +++ b/salt/cli/batch_async.py @@ -13,7 +13,6 @@ import salt.client # pylint: enable=import-error,no-name-in-module,redefined-builtin import logging -import fnmatch log = logging.getLogger(__name__) @@ -104,22 +103,25 @@ class BatchAsync(object): def __event_handler(self, raw): if not self.event: return - mtag, data = self.event.unpack(raw, self.event.serial) - for (pattern, op) in self.patterns: - if mtag.startswith(pattern[:-1]): - minion = data['id'] - if op == 'ping_return': - self.minions.add(minion) - if self.targeted_minions == self.minions: - self.event.io_loop.spawn_callback(self.start_batch) - elif op == 'find_job_return': - if data.get("return", None): - self.find_job_returned.add(minion) - elif op == 'batch_run': - if minion in self.active: - self.active.remove(minion) - self.done_minions.add(minion) - self.event.io_loop.spawn_callback(self.schedule_next) + try: + mtag, data = self.event.unpack(raw, self.event.serial) + for (pattern, op) in self.patterns: + if mtag.startswith(pattern[:-1]): + minion = data['id'] + if op == 'ping_return': + self.minions.add(minion) + if self.targeted_minions == self.minions: + self.event.io_loop.spawn_callback(self.start_batch) + elif op == 'find_job_return': + if data.get("return", None): + self.find_job_returned.add(minion) + elif op == 'batch_run': + if minion in self.active: + self.active.remove(minion) + self.done_minions.add(minion) + self.event.io_loop.spawn_callback(self.schedule_next) + except Exception as ex: + log.error("Exception occured while processing event: {}".format(ex)) def _get_next(self): to_run = self.minions.difference( @@ -146,54 +148,59 @@ class BatchAsync(object): if timedout_minions: self.schedule_next() - if running: + if self.event and running: self.find_job_returned = self.find_job_returned.difference(running) self.event.io_loop.spawn_callback(self.find_job, running) @tornado.gen.coroutine def find_job(self, minions): - not_done = minions.difference(self.done_minions).difference(self.timedout_minions) - - if not_done: - jid = self.jid_gen() - find_job_return_pattern = 'salt/job/{0}/ret/*'.format(jid) - self.patterns.add((find_job_return_pattern, "find_job_return")) - self.event.subscribe(find_job_return_pattern, match_type='glob') - - ret = yield self.local.run_job_async( - not_done, - 'saltutil.find_job', - [self.batch_jid], - 'list', - gather_job_timeout=self.opts['gather_job_timeout'], - jid=jid, - **self.eauth) - yield tornado.gen.sleep(self.opts['gather_job_timeout']) - self.event.io_loop.spawn_callback( - self.check_find_job, - not_done, - jid) + if self.event: + not_done = minions.difference(self.done_minions).difference(self.timedout_minions) + try: + if not_done: + jid = self.jid_gen() + find_job_return_pattern = 'salt/job/{0}/ret/*'.format(jid) + self.patterns.add((find_job_return_pattern, "find_job_return")) + self.event.subscribe(find_job_return_pattern, match_type='glob') + ret = yield self.local.run_job_async( + not_done, + 'saltutil.find_job', + [self.batch_jid], + 'list', + gather_job_timeout=self.opts['gather_job_timeout'], + jid=jid, + **self.eauth) + yield tornado.gen.sleep(self.opts['gather_job_timeout']) + if self.event: + self.event.io_loop.spawn_callback( + self.check_find_job, + not_done, + jid) + except Exception as ex: + log.error("Exception 
occured handling batch async: {}. Aborting execution.".format(ex)) + self.close_safe() @tornado.gen.coroutine def start(self): - self.__set_event_handler() - ping_return = yield self.local.run_job_async( - self.opts['tgt'], - 'test.ping', - [], - self.opts.get( - 'selected_target_option', - self.opts.get('tgt_type', 'glob') - ), - gather_job_timeout=self.opts['gather_job_timeout'], - jid=self.ping_jid, - metadata=self.metadata, - **self.eauth) - self.targeted_minions = set(ping_return['minions']) - #start batching even if not all minions respond to ping - yield tornado.gen.sleep(self.batch_presence_ping_timeout or self.opts['gather_job_timeout']) - self.event.io_loop.spawn_callback(self.start_batch) - + if self.event: + self.__set_event_handler() + ping_return = yield self.local.run_job_async( + self.opts['tgt'], + 'test.ping', + [], + self.opts.get( + 'selected_target_option', + self.opts.get('tgt_type', 'glob') + ), + gather_job_timeout=self.opts['gather_job_timeout'], + jid=self.ping_jid, + metadata=self.metadata, + **self.eauth) + self.targeted_minions = set(ping_return['minions']) + #start batching even if not all minions respond to ping + yield tornado.gen.sleep(self.batch_presence_ping_timeout or self.opts['gather_job_timeout']) + if self.event: + self.event.io_loop.spawn_callback(self.start_batch) @tornado.gen.coroutine def start_batch(self): @@ -206,7 +213,8 @@ class BatchAsync(object): "metadata": self.metadata } ret = self.event.fire_event(data, "salt/batch/{0}/start".format(self.batch_jid)) - self.event.io_loop.spawn_callback(self.run_next) + if self.event: + self.event.io_loop.spawn_callback(self.run_next) @tornado.gen.coroutine def end_batch(self): @@ -221,11 +229,21 @@ class BatchAsync(object): "metadata": self.metadata } self.event.fire_event(data, "salt/batch/{0}/done".format(self.batch_jid)) - for (pattern, label) in self.patterns: - if label in ["ping_return", "batch_run"]: - self.event.unsubscribe(pattern, match_type='glob') - del self - gc.collect() + + # release to the IOLoop to allow the event to be published + # before closing batch async execution + yield tornado.gen.sleep(1) + self.close_safe() + + def close_safe(self): + for (pattern, label) in self.patterns: + self.event.unsubscribe(pattern, match_type='glob') + self.event.remove_event_handler(self.__event_handler) + self.event = None + self.local = None + self.ioloop = None + del self + gc.collect() @tornado.gen.coroutine def schedule_next(self): @@ -233,7 +251,8 @@ class BatchAsync(object): self.scheduled = True # call later so that we maybe gather more returns yield tornado.gen.sleep(self.batch_delay) - self.event.io_loop.spawn_callback(self.run_next) + if self.event: + self.event.io_loop.spawn_callback(self.run_next) @tornado.gen.coroutine def run_next(self): @@ -254,17 +273,20 @@ class BatchAsync(object): metadata=self.metadata) yield tornado.gen.sleep(self.opts['timeout']) - self.event.io_loop.spawn_callback(self.find_job, set(next_batch)) + + # The batch can be done already at this point, which means no self.event + if self.event: + self.event.io_loop.spawn_callback(self.find_job, set(next_batch)) except Exception as ex: - log.error("Error in scheduling next batch: %s", ex) + log.error("Error in scheduling next batch: %s. 
Aborting execution", ex) self.active = self.active.difference(next_batch) + self.close_safe() else: yield self.end_batch() gc.collect() def __del__(self): self.local = None - self.event.remove_event_handler(self.__event_handler) self.event = None self.ioloop = None gc.collect() -- 2.23.0 ++++++ batch.py-avoid-exception-when-minion-does-not-respon.patch ++++++
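The batch_async patch above applies one defensive rule throughout: every scheduled callback first checks that self.event is still set, and a single idempotent close_safe() performs the teardown (unsubscribe, remove the handler, drop references, collect). A condensed, Tornado-free sketch of that lifecycle:

    import gc
    import logging

    log = logging.getLogger(__name__)

    class BatchLifecycle:
        # Illustrative reduction of BatchAsync's guard-and-teardown pattern.
        def __init__(self, event):
            self.event = event

        def on_event(self, raw):
            if self.event is None:
                # The batch may have been closed between scheduling and firing.
                return
            try:
                self.event.process(raw)
            except Exception as exc:
                # Never let a handler exception starve the worker processes.
                log.error('Exception occurred while processing event: %s', exc)
                self.close_safe()

        def close_safe(self):
            # Idempotent teardown: drop every reference exactly once.
            self.event = None
            gc.collect()

    class _Boom:
        def process(self, raw):
            raise RuntimeError('boom')

    batch = BatchLifecycle(_Boom())
    batch.on_event({})            # logs the error and closes
    batch.on_event({})            # safe no-op after closing
    assert batch.event is None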
From 50377852ca989ffa141fcf32d5ca57d120b455b8 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Jos=C3=A9=20Guilherme=20Vanz?= <jvanz@jvanz.com> Date: Tue, 21 May 2019 16:13:18 -0300 Subject: [PATCH] batch.py: avoid exception when minion does not respond (bsc#1135507) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit
We have several issues reporting that salt is throwing an exception when the minion does not respond. This change avoids the exception by adding default data for the minion when it fails to respond. This patch is based on the patch suggested by @roskens. Issues #46876 #48509 #50238 bsc#1135507 Signed-off-by: José Guilherme Vanz <jguilhermevanz@suse.com> --- salt/cli/batch.py | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/salt/cli/batch.py b/salt/cli/batch.py index ce239215cb..1623fc5be8 100644 --- a/salt/cli/batch.py +++ b/salt/cli/batch.py @@ -315,6 +315,11 @@ class Batch(object): if self.opts.get('failhard') and data['ret']['retcode'] > 0: failhard = True + # avoid an exception if the minion does not respond. + if data.get("failed") is True: + log.debug('Minion %s failed to respond: data=%s', minion, data) + data = {'ret': 'Minion did not return. [Failed]', 'retcode': salt.defaults.exitcodes.EX_GENERIC} + if self.opts.get('raw'): ret[minion] = data yield data -- 2.21.0 ++++++ batch_async-avoid-using-fnmatch-to-match-event-217.patch ++++++
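The batch.py patch above substitutes a well-formed placeholder return for minions that never answered, so downstream consumers can keep indexing data['ret'] and data['retcode'] without special-casing silent minions. Reduced to a helper (EX_GENERIC stands in for salt.defaults.exitcodes.EX_GENERIC):

    EX_GENERIC = 1

    def normalize_return(data):
        # A minion that failed to respond is reported in the same shape
        # as a real return, so callers need no special case.
        if data.get('failed') is True:
            return {'ret': 'Minion did not return. [Failed]', 'retcode': EX_GENERIC}
        return data

    assert normalize_return({'failed': True})['retcode'] == EX_GENERIC
    assert normalize_return({'ret': 'ok', 'retcode': 0})['retcode'] == 0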
From ae3d4495e8bb9438b941444dba04bedaef9d8c1a Mon Sep 17 00:00:00 2001 From: Silvio Moioli <smoioli@suse.de> Date: Mon, 2 Mar 2020 11:23:59 +0100 Subject: [PATCH] batch_async: avoid using fnmatch to match event (#217)
--- salt/cli/batch_async.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/salt/cli/batch_async.py b/salt/cli/batch_async.py index c4545e3ebc..da069b64bd 100644 --- a/salt/cli/batch_async.py +++ b/salt/cli/batch_async.py @@ -106,7 +106,7 @@ class BatchAsync(object): return mtag, data = self.event.unpack(raw, self.event.serial) for (pattern, op) in self.patterns: - if fnmatch.fnmatch(mtag, pattern): + if mtag.startswith(pattern[:-1]): minion = data['id'] if op == 'ping_return': self.minions.add(minion) -- 2.23.0 ++++++ bugfix-any-unicode-string-of-length-16-will-raise-ty.patch ++++++
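The one-line batch_async change above trades fnmatch for a prefix comparison. Every pattern in self.patterns ends in a single trailing '*' (for example 'salt/job/<jid>/ret/*'), and for that shape stripping the last character and testing str.startswith is equivalent to the glob match while avoiding fnmatch's per-event cost:

    import fnmatch

    pattern = 'salt/job/20200302/ret/*'
    tags = ['salt/job/20200302/ret/minion1', 'salt/job/other/ret/minion1']

    for tag in tags:
        # Holds whenever the only wildcard is a trailing '*'.
        assert tag.startswith(pattern[:-1]) == fnmatch.fnmatch(tag, pattern)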
From 8fc3419db49497ca33f99d7bbc3a251d7b07ff09 Mon Sep 17 00:00:00 2001 From: Bo Maryniuk <bo@suse.de> Date: Fri, 5 Oct 2018 12:02:08 +0200 Subject: [PATCH] Bugfix: any unicode string of length 16 will raise TypeError instead of ValueError
--- salt/_compat.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/salt/_compat.py b/salt/_compat.py index 8628833dcf..98931c6cce 100644 --- a/salt/_compat.py +++ b/salt/_compat.py @@ -191,7 +191,7 @@ class IPv6AddressScoped(ipaddress.IPv6Address): if isinstance(data, bytes) and len(data) == 16 and b':' not in data: try: packed = bool(int(str(bytearray(data)).encode('hex'), 16)) - except ValueError: + except (ValueError, TypeError): pass return packed -- 2.20.1 ++++++ calculate-fqdns-in-parallel-to-avoid-blockings-bsc-1.patch ++++++
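Context for the one-character fix above: int() signals a malformed digit string with ValueError but a value of the wrong type with TypeError, so a probe that feeds arbitrary data into int() must catch both. The Python 2 'hex' codec path in _compat.py is what originally triggered the TypeError, but the split is easy to demonstrate in general:

    for bad in ('zz', None):
        try:
            int(bad, 16)
        except ValueError:
            print('ValueError for {!r}'.format(bad))   # raised for 'zz'
        except TypeError:
            print('TypeError for {!r}'.format(bad))    # raised for None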
From 722b9395a6489da7626e6a388c78bf8e8812190e Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Pablo=20Su=C3=A1rez=20Hern=C3=A1ndez?= <psuarezhernandez@suse.com> Date: Fri, 12 Apr 2019 16:47:03 +0100 Subject: [PATCH] Calculate FQDNs in parallel to avoid blockings (bsc#1129079)
Fix pylint issue --- salt/grains/core.py | 31 ++++++++++++++++++++++++++----- 1 file changed, 26 insertions(+), 5 deletions(-) diff --git a/salt/grains/core.py b/salt/grains/core.py index 05a9d5035d..796458939d 100644 --- a/salt/grains/core.py +++ b/salt/grains/core.py @@ -20,11 +20,14 @@ import platform import logging import locale import uuid +import time import zlib from errno import EACCES, EPERM import datetime import warnings +from multiprocessing.dummy import Pool as ThreadPool + # pylint: disable=import-error try: import dateutil.tz @@ -2200,13 +2203,10 @@ def fqdns(): grains = {} fqdns = set() - addresses = salt.utils.network.ip_addrs(include_loopback=False, interface_data=_get_interfaces()) - addresses.extend(salt.utils.network.ip_addrs6(include_loopback=False, interface_data=_get_interfaces())) - err_message = 'Exception during resolving address: %s' - for ip in addresses: + def _lookup_fqdn(ip): try: name, aliaslist, addresslist = socket.gethostbyaddr(ip) - fqdns.update([socket.getfqdn(name)] + [als for als in aliaslist if salt.utils.network.is_fqdn(als)]) + return [socket.getfqdn(name)] + [als for als in aliaslist if salt.utils.network.is_fqdn(als)] except socket.herror as err: if err.errno == 0: # No FQDN for this IP address, so we don't need to know this all the time. @@ -2216,6 +2216,27 @@ def fqdns(): except (socket.error, socket.gaierror, socket.timeout) as err: log.error(err_message, err) + start = time.time() + + addresses = salt.utils.network.ip_addrs(include_loopback=False, interface_data=_get_interfaces()) + addresses.extend(salt.utils.network.ip_addrs6(include_loopback=False, interface_data=_get_interfaces())) + err_message = 'Exception during resolving address: %s' + + # Create a ThreadPool to process the underlying calls to 'socket.gethostbyaddr' in parallel. + # This avoids blocking the execution when the "fqdn" is not defined for certain IP addresses, which was causing + # "socket.timeout" to be reached multiple times sequentially, blocking execution for several seconds. + pool = ThreadPool(8) + results = pool.map(_lookup_fqdn, addresses) + pool.close() + pool.join() + + for item in results: + if item: + fqdns.update(item) + + elapsed = time.time() - start + log.debug('Elapsed time getting FQDNs: {} seconds'.format(elapsed)) + return {"fqdns": sorted(list(fqdns))} -- 2.17.1 ++++++ checking-for-jid-before-returning-data.patch ++++++
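The fqdns patch above parallelises the reverse lookups because socket.gethostbyaddr blocks for the full resolver timeout on every address without a PTR record; with eight worker threads the total wall time is bounded by roughly the slowest lookup instead of their sum. The pattern in isolation, with a stubbed resolver:

    import time
    from multiprocessing.dummy import Pool as ThreadPool  # thread-backed Pool

    def slow_lookup(ip):
        # Stand-in for socket.gethostbyaddr; pretend each call blocks 0.2s.
        time.sleep(0.2)
        return ['host-{}.example.net'.format(ip)]

    addresses = ['192.0.2.{}'.format(i) for i in range(8)]

    start = time.time()
    pool = ThreadPool(8)
    results = pool.map(slow_lookup, addresses)
    pool.close()
    pool.join()

    # Serially this would take ~1.6s; with 8 threads it is ~0.2s.
    print('Elapsed: {:.2f}s, {} results'.format(time.time() - start, len(results)))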
From 8ced9cdeb53e7dc20a1665ba2e373fbdc5d30c5d Mon Sep 17 00:00:00 2001 From: Jochen Breuer <jbreuer@suse.de> Date: Tue, 9 Apr 2019 16:32:46 +0200 Subject: [PATCH] Checking for jid before returning data
It seems raw can have returns for the same minion, but from another job. In order to not return results from the wrong job, we need to check for the jid. --- salt/client/__init__.py | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/salt/client/__init__.py b/salt/client/__init__.py index 8b37422cbf..aff354a021 100644 --- a/salt/client/__init__.py +++ b/salt/client/__init__.py @@ -1560,8 +1560,12 @@ class LocalClient(object): if 'minions' in raw.get('data', {}): continue try: - found.add(raw['id']) - ret = {raw['id']: {'ret': raw['return']}} + # There might be two jobs for the same minion, so we have to check for the jid + if jid == raw['jid']: + found.add(raw['id']) + ret = {raw['id']: {'ret': raw['return']}} + else: + continue except KeyError: # Ignore other erroneous messages continue -- 2.22.0 ++++++ debian-info_installed-compatibility-50453.patch ++++++
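The LocalClient patch above guards against cross-talk on the event bus: the same minion can be returning for a different job, so each raw event's jid must match the job being collected before the return is recorded. As a standalone filter:

    def collect_return(raw, jid, found):
        # Only accept the event if it belongs to the job we are waiting on.
        if raw.get('jid') != jid:
            return None
        found.add(raw['id'])
        return {raw['id']: {'ret': raw['return']}}

    found = set()
    assert collect_return({'jid': '1', 'id': 'm1', 'return': True}, '1', found)
    assert collect_return({'jid': '2', 'id': 'm1', 'return': True}, '1', found) is None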
From afdfd35222223d81c304854b5ae7af60f3820ed3 Mon Sep 17 00:00:00 2001 From: Bo Maryniuk <bo@suse.de> Date: Tue, 20 Nov 2018 16:06:31 +0100 Subject: [PATCH] Debian info_installed compatibility (#50453)
Remove unused variable Get unit ticks installation time Pass on unix ticks installation date time Implement function to figure out package build time Unify arch attribute Add 'attr' support. Use attr parameter in aptpkg Add 'all_versions' output structure backward compatibility Fix docstring Add UT for generic test of function 'info' Add UT for 'info' function with the parameter 'attr' Add UT for info_installed's 'attr' param Fix docstring Add returned type check Add UT for info_installed with 'all_versions=True' output structure Refactor UT for 'owner' function Refactor UT: move to decorators, add more checks Schedule TODO for next refactoring of UT 'show' function Refactor UT: get rid of old assertion way, flatten tests Refactor UT: move to native assertions, cleanup noise, flatten complexity for better visibility what is tested Lintfix: too many empty lines Adjust architecture getter according to the lowpkg info Fix wrong Git merge: missing function signature --- salt/modules/aptpkg.py | 20 +++- salt/modules/dpkg_lowpkg.py | 93 +++++++++++++-- tests/unit/modules/test_aptpkg.py | 151 ++++++++++++++++--------- tests/unit/modules/test_dpkg_lowpkg.py | 69 +++++++++++ 4 files changed, 267 insertions(+), 66 deletions(-) diff --git a/salt/modules/aptpkg.py b/salt/modules/aptpkg.py index 6b3a921a82..64620647c2 100644 --- a/salt/modules/aptpkg.py +++ b/salt/modules/aptpkg.py @@ -2776,6 +2776,15 @@ def info_installed(*names, **kwargs): .. versionadded:: 2016.11.3 + attr + Comma-separated package attributes. If no 'attr' is specified, all available attributes returned. + + Valid attributes are: + version, vendor, release, build_date, build_date_time_t, install_date, install_date_time_t, + build_host, group, source_rpm, arch, epoch, size, license, signature, packager, url, summary, description. + + .. versionadded:: Neon + CLI example: .. code-block:: bash @@ -2786,11 +2795,15 @@ def info_installed(*names, **kwargs): ''' kwargs = salt.utils.args.clean_kwargs(**kwargs) failhard = kwargs.pop('failhard', True) + kwargs.pop('errors', None) # Only for compatibility with RPM + attr = kwargs.pop('attr', None) # Package attributes to return + all_versions = kwargs.pop('all_versions', False) # This is for backward compatible structure only + if kwargs: salt.utils.args.invalid_kwargs(kwargs) ret = dict() - for pkg_name, pkg_nfo in __salt__['lowpkg.info'](*names, failhard=failhard).items(): + for pkg_name, pkg_nfo in __salt__['lowpkg.info'](*names, failhard=failhard, attr=attr).items(): t_nfo = dict() # Translate dpkg-specific keys to a common structure for key, value in pkg_nfo.items(): @@ -2807,7 +2820,10 @@ def info_installed(*names, **kwargs): else: t_nfo[key] = value - ret[pkg_name] = t_nfo + if all_versions: + ret.setdefault(pkg_name, []).append(t_nfo) + else: + ret[pkg_name] = t_nfo return ret diff --git a/salt/modules/dpkg_lowpkg.py b/salt/modules/dpkg_lowpkg.py index 03be5f821a..26ca5dcf5a 100644 --- a/salt/modules/dpkg_lowpkg.py +++ b/salt/modules/dpkg_lowpkg.py @@ -252,6 +252,38 @@ def file_dict(*packages): return {'errors': errors, 'packages': ret} +def _get_pkg_build_time(name): + ''' + Get package build time, if possible. 
+ + :param name: + :return: + ''' + iso_time = iso_time_t = None + changelog_dir = os.path.join('/usr/share/doc', name) + if os.path.exists(changelog_dir): + for fname in os.listdir(changelog_dir): + try: + iso_time_t = int(os.path.getmtime(os.path.join(changelog_dir, fname))) + iso_time = datetime.datetime.utcfromtimestamp(iso_time_t).isoformat() + 'Z' + break + except OSError: + pass + + # Packager doesn't care about Debian standards, therefore Plan B: brute-force it. + if not iso_time: + for pkg_f_path in __salt__['cmd.run']('dpkg-query -L {}'.format(name)).splitlines(): + if 'changelog' in pkg_f_path.lower() and os.path.exists(pkg_f_path): + try: + iso_time_t = int(os.path.getmtime(pkg_f_path)) + iso_time = datetime.datetime.utcfromtimestamp(iso_time_t).isoformat() + 'Z' + break + except OSError: + pass + + return iso_time, iso_time_t + + def _get_pkg_info(*packages, **kwargs): ''' Return list of package information. If 'packages' parameter is empty, @@ -274,7 +306,7 @@ def _get_pkg_info(*packages, **kwargs): ret = [] cmd = "dpkg-query -W -f='package:" + bin_var + "\\n" \ "revision:${binary:Revision}\\n" \ - "architecture:${Architecture}\\n" \ + "arch:${Architecture}\\n" \ "maintainer:${Maintainer}\\n" \ "summary:${Summary}\\n" \ "source:${source:Package}\\n" \ @@ -307,9 +339,14 @@ def _get_pkg_info(*packages, **kwargs): key, value = pkg_info_line.split(":", 1) if value: pkg_data[key] = value - install_date = _get_pkg_install_time(pkg_data.get('package')) - if install_date: - pkg_data['install_date'] = install_date + install_date, install_date_t = _get_pkg_install_time(pkg_data.get('package'), pkg_data.get('arch')) + if install_date: + pkg_data['install_date'] = install_date + pkg_data['install_date_time_t'] = install_date_t # Unix ticks + build_date, build_date_t = _get_pkg_build_time(pkg_data.get('package')) + if build_date: + pkg_data['build_date'] = build_date + pkg_data['build_date_time_t'] = build_date_t pkg_data['description'] = pkg_descr.split(":", 1)[-1] ret.append(pkg_data) @@ -335,19 +372,32 @@ def _get_pkg_license(pkg): return ", ".join(sorted(licenses)) -def _get_pkg_install_time(pkg): +def _get_pkg_install_time(pkg, arch): ''' Return package install time, based on the /var/lib/dpkg/info/<package>.list :return: ''' - iso_time = None + iso_time = iso_time_t = None + loc_root = '/var/lib/dpkg/info' if pkg is not None: - location = "/var/lib/dpkg/info/{0}.list".format(pkg) - if os.path.exists(location): - iso_time = datetime.datetime.utcfromtimestamp(int(os.path.getmtime(location))).isoformat() + "Z" + locations = [] + if arch is not None and arch != 'all': + locations.append(os.path.join(loc_root, '{0}:{1}.list'.format(pkg, arch))) + + locations.append(os.path.join(loc_root, '{0}.list'.format(pkg))) + for location in locations: + try: + iso_time_t = int(os.path.getmtime(location)) + iso_time = datetime.datetime.utcfromtimestamp(iso_time_t).isoformat() + 'Z' + break + except OSError: + pass - return iso_time + if iso_time is None: + log.debug('Unable to get package installation time for package "%s".', pkg) + + return iso_time, iso_time_t def _get_pkg_ds_avail(): @@ -397,6 +447,15 @@ def info(*packages, **kwargs): .. versionadded:: 2016.11.3 + attr + Comma-separated package attributes. If no 'attr' is specified, all available attributes returned. + + Valid attributes are: + version, vendor, release, build_date, build_date_time_t, install_date, install_date_time_t, + build_host, group, source_rpm, arch, epoch, size, license, signature, packager, url, summary, description. 
+ + .. versionadded:: Neon + CLI example: .. code-block:: bash @@ -411,6 +470,10 @@ def info(*packages, **kwargs): kwargs = salt.utils.args.clean_kwargs(**kwargs) failhard = kwargs.pop('failhard', True) + attr = kwargs.pop('attr', None) or None + if attr: + attr = attr.split(',') + if kwargs: salt.utils.args.invalid_kwargs(kwargs) @@ -430,6 +493,14 @@ def info(*packages, **kwargs): lic = _get_pkg_license(pkg['package']) if lic: pkg['license'] = lic - ret[pkg['package']] = pkg + + # Remove keys that aren't in attrs + pkg_name = pkg['package'] + if attr: + for k in list(pkg.keys())[:]: + if k not in attr: + del pkg[k] + + ret[pkg_name] = pkg return ret diff --git a/tests/unit/modules/test_aptpkg.py b/tests/unit/modules/test_aptpkg.py index 1e963ee5db..580b840197 100644 --- a/tests/unit/modules/test_aptpkg.py +++ b/tests/unit/modules/test_aptpkg.py @@ -20,6 +20,8 @@ from tests.support.mock import Mock, MagicMock, patch, NO_MOCK, NO_MOCK_REASON from salt.ext import six from salt.exceptions import CommandExecutionError, SaltInvocationError import salt.modules.aptpkg as aptpkg +import pytest +import textwrap try: import pytest @@ -148,51 +150,39 @@ class AptPkgTestCase(TestCase, LoaderModuleMockMixin): def setup_loader_modules(self): return {aptpkg: {}} + @patch('salt.modules.aptpkg.__salt__', + {'pkg_resource.version': MagicMock(return_value=LOWPKG_INFO['wget']['version'])}) def test_version(self): ''' Test - Returns a string representing the package version or an empty string if not installed. ''' - version = LOWPKG_INFO['wget']['version'] - mock = MagicMock(return_value=version) - with patch.dict(aptpkg.__salt__, {'pkg_resource.version': mock}): - self.assertEqual(aptpkg.version(*['wget']), version) + assert aptpkg.version(*['wget']) == aptpkg.__salt__['pkg_resource.version']() + @patch('salt.modules.aptpkg.latest_version', MagicMock(return_value='')) def test_upgrade_available(self): ''' Test - Check whether or not an upgrade is available for a given package. ''' - with patch('salt.modules.aptpkg.latest_version', - MagicMock(return_value='')): - self.assertFalse(aptpkg.upgrade_available('wget')) + assert not aptpkg.upgrade_available('wget') + @patch('salt.modules.aptpkg.get_repo_keys', MagicMock(return_value=REPO_KEYS)) + @patch('salt.modules.aptpkg.__salt__', {'cmd.run_all': MagicMock(return_value={'retcode': 0, 'stdout': 'OK'})}) def test_add_repo_key(self): ''' Test - Add a repo key. ''' - with patch('salt.modules.aptpkg.get_repo_keys', - MagicMock(return_value=REPO_KEYS)): - mock = MagicMock(return_value={ - 'retcode': 0, - 'stdout': 'OK' - }) - with patch.dict(aptpkg.__salt__, {'cmd.run_all': mock}): - self.assertTrue(aptpkg.add_repo_key(keyserver='keyserver.ubuntu.com', - keyid='FBB75451')) + assert aptpkg.add_repo_key(keyserver='keyserver.ubuntu.com', keyid='FBB75451') + @patch('salt.modules.aptpkg.get_repo_keys', MagicMock(return_value=REPO_KEYS)) + @patch('salt.modules.aptpkg.__salt__', {'cmd.run_all': MagicMock(return_value={'retcode': 0, 'stdout': 'OK'})}) def test_add_repo_key_failed(self): ''' Test - Add a repo key using incomplete input data. 
''' - with patch('salt.modules.aptpkg.get_repo_keys', - MagicMock(return_value=REPO_KEYS)): - kwargs = {'keyserver': 'keyserver.ubuntu.com'} - mock = MagicMock(return_value={ - 'retcode': 0, - 'stdout': 'OK' - }) - with patch.dict(aptpkg.__salt__, {'cmd.run_all': mock}): - self.assertRaises(SaltInvocationError, aptpkg.add_repo_key, **kwargs) + with pytest.raises(SaltInvocationError) as ex: + aptpkg.add_repo_key(keyserver='keyserver.ubuntu.com') + assert ' No keyid or keyid too short for keyserver: keyserver.ubuntu.com' in str(ex) def test_get_repo_keys(self): ''' @@ -205,35 +195,31 @@ class AptPkgTestCase(TestCase, LoaderModuleMockMixin): with patch.dict(aptpkg.__salt__, {'cmd.run_all': mock}): self.assertEqual(aptpkg.get_repo_keys(), REPO_KEYS) + @patch('salt.modules.aptpkg.__salt__', {'lowpkg.file_dict': MagicMock(return_value=LOWPKG_FILES)}) def test_file_dict(self): ''' Test - List the files that belong to a package, grouped by package. ''' - mock = MagicMock(return_value=LOWPKG_FILES) - with patch.dict(aptpkg.__salt__, {'lowpkg.file_dict': mock}): - self.assertEqual(aptpkg.file_dict('wget'), LOWPKG_FILES) + assert aptpkg.file_dict('wget') == LOWPKG_FILES + @patch('salt.modules.aptpkg.__salt__', { + 'lowpkg.file_list': MagicMock(return_value={'errors': LOWPKG_FILES['errors'], + 'files': LOWPKG_FILES['packages']['wget']})}) def test_file_list(self): ''' - Test - List the files that belong to a package. + Test 'file_list' function, which is just an alias to the lowpkg 'file_list' + ''' - files = { - 'errors': LOWPKG_FILES['errors'], - 'files': LOWPKG_FILES['packages']['wget'], - } - mock = MagicMock(return_value=files) - with patch.dict(aptpkg.__salt__, {'lowpkg.file_list': mock}): - self.assertEqual(aptpkg.file_list('wget'), files) + assert aptpkg.file_list('wget') == aptpkg.__salt__['lowpkg.file_list']() + @patch('salt.modules.aptpkg.__salt__', {'cmd.run_stdout': MagicMock(return_value='wget\t\t\t\t\t\tinstall')}) def test_get_selections(self): ''' Test - View package state from the dpkg database. ''' - selections = {'install': ['wget']} - mock = MagicMock(return_value='wget\t\t\t\t\t\tinstall') - with patch.dict(aptpkg.__salt__, {'cmd.run_stdout': mock}): - self.assertEqual(aptpkg.get_selections('wget'), selections) + assert aptpkg.get_selections('wget') == {'install': ['wget']} + @patch('salt.modules.aptpkg.__salt__', {'lowpkg.info': MagicMock(return_value=LOWPKG_INFO)}) def test_info_installed(self): ''' Test - Return the information of the named package(s) installed on the system. @@ -249,19 +235,72 @@ class AptPkgTestCase(TestCase, LoaderModuleMockMixin): if installed['wget'].get(names[name], False): installed['wget'][name] = installed['wget'].pop(names[name]) - mock = MagicMock(return_value=LOWPKG_INFO) - with patch.dict(aptpkg.__salt__, {'lowpkg.info': mock}): - self.assertEqual(aptpkg.info_installed('wget'), installed) + assert aptpkg.info_installed('wget') == installed + + @patch('salt.modules.aptpkg.__salt__', {'lowpkg.info': MagicMock(return_value=LOWPKG_INFO)}) + def test_info_installed_attr(self): + ''' + Test info_installed 'attr'. + This doesn't test 'attr' behaviour per se, since the underlying function is in dpkg. + The test should simply not raise exceptions for invalid parameter. 
+ + :return: + ''' + ret = aptpkg.info_installed('emacs', attr='foo,bar') + assert isinstance(ret, dict) + assert 'wget' in ret + assert isinstance(ret['wget'], dict) + + wget_pkg = ret['wget'] + expected_pkg = {'url': 'http://www.gnu.org/software/wget/', + 'packager': 'Ubuntu Developers <ubuntu-devel-discuss@lists.ubuntu.com>', 'name': 'wget', + 'install_date': '2016-08-30T22:20:15Z', 'description': 'retrieves files from the web', + 'version': '1.15-1ubuntu1.14.04.2', 'architecture': 'amd64', 'group': 'web', 'source': 'wget'} + for k in wget_pkg: + assert k in expected_pkg + assert wget_pkg[k] == expected_pkg[k] + + @patch('salt.modules.aptpkg.__salt__', {'lowpkg.info': MagicMock(return_value=LOWPKG_INFO)}) + def test_info_installed_all_versions(self): + ''' + Test info_installed 'all_versions'. + Since Debian won't return same name packages with the different names, + this should just return different structure, backward compatible with + the RPM equivalents. + + :return: + ''' + print() + ret = aptpkg.info_installed('emacs', all_versions=True) + assert isinstance(ret, dict) + assert 'wget' in ret + assert isinstance(ret['wget'], list) + + pkgs = ret['wget'] + + assert len(pkgs) == 1 + assert isinstance(pkgs[0], dict) + + wget_pkg = pkgs[0] + expected_pkg = {'url': 'http://www.gnu.org/software/wget/', + 'packager': 'Ubuntu Developers <ubuntu-devel-discuss@lists.ubuntu.com>', 'name': 'wget', + 'install_date': '2016-08-30T22:20:15Z', 'description': 'retrieves files from the web', + 'version': '1.15-1ubuntu1.14.04.2', 'architecture': 'amd64', 'group': 'web', 'source': 'wget'} + for k in wget_pkg: + assert k in expected_pkg + assert wget_pkg[k] == expected_pkg[k] + @patch('salt.modules.aptpkg.__salt__', {'cmd.run_stdout': MagicMock(return_value='wget: /usr/bin/wget')}) def test_owner(self): ''' Test - Return the name of the package that owns the file. ''' - paths = ['/usr/bin/wget'] - mock = MagicMock(return_value='wget: /usr/bin/wget') - with patch.dict(aptpkg.__salt__, {'cmd.run_stdout': mock}): - self.assertEqual(aptpkg.owner(*paths), 'wget') + assert aptpkg.owner('/usr/bin/wget') == 'wget' + @patch('salt.utils.pkg.clear_rtag', MagicMock()) + @patch('salt.modules.aptpkg.__salt__', {'cmd.run_all': MagicMock(return_value={'retcode': 0, + 'stdout': APT_Q_UPDATE}), + 'config.get': MagicMock(return_value=False)}) def test_refresh_db(self): ''' Test - Updates the APT database to latest packages based upon repositories. @@ -281,6 +320,10 @@ class AptPkgTestCase(TestCase, LoaderModuleMockMixin): with patch.dict(aptpkg.__salt__, {'cmd.run_all': mock, 'config.get': MagicMock(return_value=False)}): self.assertEqual(aptpkg.refresh_db(), refresh_db) + @patch('salt.utils.pkg.clear_rtag', MagicMock()) + @patch('salt.modules.aptpkg.__salt__', {'cmd.run_all': MagicMock(return_value={'retcode': 0, + 'stdout': APT_Q_UPDATE_ERROR}), + 'config.get': MagicMock(return_value=False)}) def test_refresh_db_failed(self): ''' Test - Update the APT database using unreachable repositories. @@ -312,22 +355,24 @@ class AptPkgTestCase(TestCase, LoaderModuleMockMixin): assert aptpkg.autoremove(list_only=True) == [] assert aptpkg.autoremove(list_only=True, purge=True) == [] + @patch('salt.modules.aptpkg._uninstall', MagicMock(return_value=UNINSTALL)) def test_remove(self): ''' Test - Remove packages. 
''' - with patch('salt.modules.aptpkg._uninstall', - MagicMock(return_value=UNINSTALL)): - self.assertEqual(aptpkg.remove(name='tmux'), UNINSTALL) + assert aptpkg.remove(name='tmux') == UNINSTALL + @patch('salt.modules.aptpkg._uninstall', MagicMock(return_value=UNINSTALL)) def test_purge(self): ''' Test - Remove packages along with all configuration files. ''' - with patch('salt.modules.aptpkg._uninstall', - MagicMock(return_value=UNINSTALL)): - self.assertEqual(aptpkg.purge(name='tmux'), UNINSTALL) + assert aptpkg.purge(name='tmux') == UNINSTALL + @patch('salt.utils.pkg.clear_rtag', MagicMock()) + @patch('salt.modules.aptpkg.list_pkgs', MagicMock(return_value=UNINSTALL)) + @patch.multiple(aptpkg, **{'__salt__': {'config.get': MagicMock(return_value=True), + 'cmd.run_all': MagicMock(return_value={'retcode': 0, 'stdout': UPGRADE})}}) def test_upgrade(self): ''' Test - Upgrades all packages. diff --git a/tests/unit/modules/test_dpkg_lowpkg.py b/tests/unit/modules/test_dpkg_lowpkg.py index bdcb7eec89..d16ce3cc1a 100644 --- a/tests/unit/modules/test_dpkg_lowpkg.py +++ b/tests/unit/modules/test_dpkg_lowpkg.py @@ -25,6 +25,30 @@ class DpkgTestCase(TestCase, LoaderModuleMockMixin): ''' Test cases for salt.modules.dpkg ''' + dselect_pkg = { + 'emacs': {'priority': 'optional', 'filename': 'pool/main/e/emacs-defaults/emacs_46.1_all.deb', + 'description': 'GNU Emacs editor (metapackage)', 'md5sum': '766eb2cee55ba0122dac64c4cea04445', + 'sha256': 'd172289b9a1608820eddad85c7ffc15f346a6e755c3120de0f64739c4bbc44ce', + 'description-md5': '21fb7da111336097a2378959f6d6e6a8', + 'bugs': 'https://bugs.launchpad.net/springfield/+filebug', + 'depends': 'emacs24 | emacs24-lucid | emacs24-nox', 'origin': 'Simpsons', 'version': '46.1', + 'task': 'ubuntu-usb, edubuntu-usb', 'original-maintainer': 'Homer Simpson <homer@springfield.org>', + 'package': 'emacs', 'architecture': 'all', 'size': '1692', + 'sha1': '9271bcec53c1f7373902b1e594d9fc0359616407', 'source': 'emacs-defaults', + 'maintainer': 'Simpsons Developers <simpsons-devel-discuss@lists.springfield.org>', 'supported': '9m', + 'section': 'editors', 'installed-size': '25'} + } + + pkgs_info = [ + {'version': '46.1', 'arch': 'all', 'build_date': '2014-08-07T16:51:48Z', 'install_date_time_t': 1481745778, + 'section': 'editors', 'description': 'GNU Emacs editor (metapackage)\n GNU Emacs is the extensible ' + 'self-documenting text editor.\n This is a metapackage that will always ' + 'depend on the latest\n recommended Emacs release.\n', + 'package': 'emacs', 'source': 'emacs-defaults', + 'maintainer': 'Simpsons Developers <simpsons-devel-discuss@lists.springfield.org>', + 'build_date_time_t': 1407430308, 'installed_size': '25', 'install_date': '2016-12-14T20:02:58Z'} + ] + def setup_loader_modules(self): return {dpkg: {}} @@ -102,3 +126,48 @@ class DpkgTestCase(TestCase, LoaderModuleMockMixin): 'stdout': 'Salt'}) with patch.dict(dpkg.__salt__, {'cmd.run_all': mock}): self.assertEqual(dpkg.file_dict('httpd'), 'Error: error') + + @patch('salt.modules.dpkg._get_pkg_ds_avail', MagicMock(return_value=dselect_pkg)) + @patch('salt.modules.dpkg._get_pkg_info', MagicMock(return_value=pkgs_info)) + @patch('salt.modules.dpkg._get_pkg_license', MagicMock(return_value='BSD v3')) + def test_info(self): + ''' + Test info + :return: + ''' + ret = dpkg.info('emacs') + + assert isinstance(ret, dict) + assert len(ret.keys()) == 1 + assert 'emacs' in ret + + pkg_data = ret['emacs'] + + assert isinstance(pkg_data, dict) + for pkg_section in ['section', 'architecture', 
'original-maintainer', 'maintainer', 'package', 'installed-size', + 'build_date_time_t', 'sha256', 'origin', 'build_date', 'size', 'source', 'version', + 'install_date_time_t', 'license', 'priority', 'description', 'md5sum', 'supported', + 'filename', 'sha1', 'install_date', 'arch']: + assert pkg_section in pkg_data + + assert pkg_data['section'] == 'editors' + assert pkg_data['maintainer'] == 'Simpsons Developers <simpsons-devel-discuss@lists.springfield.org>' + assert pkg_data['license'] == 'BSD v3' + + @patch('salt.modules.dpkg._get_pkg_ds_avail', MagicMock(return_value=dselect_pkg)) + @patch('salt.modules.dpkg._get_pkg_info', MagicMock(return_value=pkgs_info)) + @patch('salt.modules.dpkg._get_pkg_license', MagicMock(return_value='BSD v3')) + def test_info_attr(self): + ''' + Test info with 'attr' parameter + :return: + ''' + ret = dpkg.info('emacs', attr='arch,license,version') + assert isinstance(ret, dict) + assert 'emacs' in ret + for attr in ['arch', 'license', 'version']: + assert attr in ret['emacs'] + + assert ret['emacs']['arch'] == 'all' + assert ret['emacs']['license'] == 'BSD v3' + assert ret['emacs']['version'] == '46.1' -- 2.20.1 ++++++ decide-if-the-source-should-be-actually-skipped.patch ++++++
From 5eacdf8fef35cdd05cae1b65485b3f820c86bc68 Mon Sep 17 00:00:00 2001 From: Bo Maryniuk <bo@suse.de> Date: Tue, 4 Dec 2018 16:39:08 +0100 Subject: [PATCH] Decide if the source should be actually skipped
--- salt/modules/aptpkg.py | 23 ++++++++++++++++++++++- 1 file changed, 22 insertions(+), 1 deletion(-) diff --git a/salt/modules/aptpkg.py b/salt/modules/aptpkg.py index dc27903230..42d606926f 100644 --- a/salt/modules/aptpkg.py +++ b/salt/modules/aptpkg.py @@ -1698,6 +1698,27 @@ def list_repo_pkgs(*args, **kwargs): # pylint: disable=unused-import return ret +def _skip_source(source): + ''' + Decide whether to skip the source or not. + + :param source: + :return: + ''' + if source.invalid: + if source.uri and source.type and source.type in ("deb", "deb-src", "rpm", "rpm-src"): + pieces = source.mysplit(source.line) + if pieces[1].strip()[0] == "[": + options = pieces.pop(1).strip("[]").split() + if len(options) > 0: + log.debug("Source %s will be included although it is marked invalid", source.uri) + return False + return True + else: + return True + return False + + def list_repos(): ''' Lists all repos in the sources.list (and sources.lists.d) files @@ -1713,7 +1734,7 @@ def list_repos(): repos = {} sources = sourceslist.SourcesList() for source in sources.list: - if source.invalid: + if _skip_source(source): continue repo = {} repo['file'] = source.file -- 2.20.1 ++++++ delete-bad-api-token-files.patch ++++++
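Background for the _skip_source patch above: python-apt marks any source line carrying a bracketed options block, such as 'deb [arch=amd64] http://repo xenial main', as invalid even though apt itself accepts it. The patch therefore re-examines 'invalid' entries and keeps those that are merely carrying options. The decision in miniature (simplified; the real code uses the sourceslist entry's own mysplit):

    def skip_source(invalid, type_, uri, line):
        # Keep deb/deb-src lines that were only flagged because of an
        # options block such as [arch=amd64].
        if not invalid:
            return False
        if uri and type_ in ('deb', 'deb-src', 'rpm', 'rpm-src'):
            pieces = line.split()
            if len(pieces) > 1 and pieces[1].startswith('['):
                return False
        return True

    assert not skip_source(True, 'deb', 'http://repo', 'deb [arch=amd64] http://repo xenial main')
    assert skip_source(True, 'deb', '', 'deb')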
From 2a3e94535e41339aa3163d89151cee44e04fe5b1 Mon Sep 17 00:00:00 2001 From: Jochen Breuer <jbreuer@suse.de> Date: Wed, 29 Jan 2020 15:49:35 +0100 Subject: [PATCH] Delete bad API token files
Under the following conditions an API token should be considered invalid: - The file is empty. - We cannot deserialize the token from the file. - The token exists but has no expiration date. - The token exists but has expired. All of these conditions necessitate deleting the token file. Otherwise we should simply return an empty token. --- salt/auth/__init__.py | 27 ++++++++++----- salt/exceptions.py | 6 ++++ salt/payload.py | 10 ++++-- tests/unit/test_auth.py | 69 +++++++++++++++++++++++++++++++++++++++ tests/unit/tokens/test_localfs.py | 47 +++++++++++++++++++++++++- 5 files changed, 147 insertions(+), 12 deletions(-) diff --git a/salt/auth/__init__.py b/salt/auth/__init__.py index a8aefa7091..e44c94fb37 100644 --- a/salt/auth/__init__.py +++ b/salt/auth/__init__.py @@ -25,7 +25,9 @@ from salt.ext import six # Import salt libs import salt.config +import salt.exceptions import salt.loader +import salt.payload import salt.transport.client import salt.utils.args import salt.utils.dictupdate @@ -34,7 +36,6 @@ import salt.utils.minions import salt.utils.user import salt.utils.versions import salt.utils.zeromq -import salt.payload log = logging.getLogger(__name__) @@ -246,16 +247,24 @@ class LoadAuth(object): Return the name associated with the token, or False if the token is not valid ''' - tdata = self.tokens["{0}.get_token".format(self.opts['eauth_tokens'])](self.opts, tok) - if not tdata: - return {} - - rm_tok = False - if 'expire' not in tdata: - # invalid token, delete it! + tdata = {} + try: + tdata = self.tokens["{0}.get_token".format(self.opts['eauth_tokens'])](self.opts, tok) + except salt.exceptions.SaltDeserializationError: + log.warning("Failed to load token %r - removing broken/empty file.", tok) rm_tok = True - if tdata.get('expire', '0') < time.time(): + else: + if not tdata: + return {} + rm_tok = False + + if tdata.get('expire', 0) < time.time(): + # If expire isn't present in the token it's invalid and needs + # to be removed. Also, if it's present and has expired - in + # other words, the expiration is before right now, it should + # be removed. rm_tok = True + if rm_tok: self.rm_token(tok) diff --git a/salt/exceptions.py b/salt/exceptions.py index cc6980b289..d0d865527d 100644 --- a/salt/exceptions.py +++ b/salt/exceptions.py @@ -351,6 +351,12 @@ class TokenAuthenticationError(SaltException): ''' +class SaltDeserializationError(SaltException): + ''' + Thrown when salt cannot deserialize data. + ''' + + class AuthorizationError(SaltException): ''' Thrown when runner or wheel execution fails due to permissions diff --git a/salt/payload.py b/salt/payload.py index ea569c9f73..dc34cf4dab 100644 --- a/salt/payload.py +++ b/salt/payload.py @@ -18,7 +18,7 @@ import salt.crypt import salt.transport.frame import salt.utils.immutabletypes as immutabletypes import salt.utils.stringutils -from salt.exceptions import SaltReqTimeoutError +from salt.exceptions import SaltReqTimeoutError, SaltDeserializationError from salt.utils.data import CaseInsensitiveDict # Import third party libs @@ -164,7 +164,13 @@ class Serial(object): ) log.debug('Msgpack deserialization failure on message: %s', msg) gc.collect() - raise + raise six.raise_from( + SaltDeserializationError( + 'Could not deserialize msgpack message.' + ' See log for more info.'
+ ), + exc, + ) finally: gc.enable() return ret diff --git a/tests/unit/test_auth.py b/tests/unit/test_auth.py index 5d88e82077..54c3915144 100644 --- a/tests/unit/test_auth.py +++ b/tests/unit/test_auth.py @@ -6,6 +6,8 @@ # Import pytohn libs from __future__ import absolute_import, print_function, unicode_literals +import time + # Import Salt Testing libs from tests.support.unit import TestCase, skipIf from tests.support.mock import patch, call, NO_MOCK, NO_MOCK_REASON, MagicMock @@ -14,6 +16,7 @@ from tests.support.mock import patch, call, NO_MOCK, NO_MOCK_REASON, MagicMock import salt.master from tests.support.case import ModuleCase from salt import auth +from salt.exceptions import SaltDeserializationError import salt.utils.platform @@ -37,6 +40,72 @@ class LoadAuthTestCase(TestCase): self.addCleanup(patcher.stop) self.lauth = auth.LoadAuth({}) # Load with empty opts + def test_get_tok_with_broken_file_will_remove_bad_token(self): + fake_get_token = MagicMock(side_effect=SaltDeserializationError('hi')) + patch_opts = patch.dict(self.lauth.opts, {'eauth_tokens': 'testfs'}) + patch_get_token = patch.dict( + self.lauth.tokens, + { + 'testfs.get_token': fake_get_token + }, + ) + mock_rm_token = MagicMock() + patch_rm_token = patch.object(self.lauth, 'rm_token', mock_rm_token) + with patch_opts, patch_get_token, patch_rm_token: + expected_token = 'fnord' + self.lauth.get_tok(expected_token) + mock_rm_token.assert_called_with(expected_token) + + def test_get_tok_with_no_expiration_should_remove_bad_token(self): + fake_get_token = MagicMock(return_value={'no_expire_here': 'Nope'}) + patch_opts = patch.dict(self.lauth.opts, {'eauth_tokens': 'testfs'}) + patch_get_token = patch.dict( + self.lauth.tokens, + { + 'testfs.get_token': fake_get_token + }, + ) + mock_rm_token = MagicMock() + patch_rm_token = patch.object(self.lauth, 'rm_token', mock_rm_token) + with patch_opts, patch_get_token, patch_rm_token: + expected_token = 'fnord' + self.lauth.get_tok(expected_token) + mock_rm_token.assert_called_with(expected_token) + + def test_get_tok_with_expire_before_current_time_should_remove_token(self): + fake_get_token = MagicMock(return_value={'expire': time.time()-1}) + patch_opts = patch.dict(self.lauth.opts, {'eauth_tokens': 'testfs'}) + patch_get_token = patch.dict( + self.lauth.tokens, + { + 'testfs.get_token': fake_get_token + }, + ) + mock_rm_token = MagicMock() + patch_rm_token = patch.object(self.lauth, 'rm_token', mock_rm_token) + with patch_opts, patch_get_token, patch_rm_token: + expected_token = 'fnord' + self.lauth.get_tok(expected_token) + mock_rm_token.assert_called_with(expected_token) + + def test_get_tok_with_valid_expiration_should_return_token(self): + expected_token = {'expire': time.time()+1} + fake_get_token = MagicMock(return_value=expected_token) + patch_opts = patch.dict(self.lauth.opts, {'eauth_tokens': 'testfs'}) + patch_get_token = patch.dict( + self.lauth.tokens, + { + 'testfs.get_token': fake_get_token + }, + ) + mock_rm_token = MagicMock() + patch_rm_token = patch.object(self.lauth, 'rm_token', mock_rm_token) + with patch_opts, patch_get_token, patch_rm_token: + token_name = 'fnord' + actual_token = self.lauth.get_tok(token_name) + mock_rm_token.assert_not_called() + assert expected_token is actual_token, 'Token was not returned' + def test_load_name(self): valid_eauth_load = {'username': 'test_user', 'show_timeout': False, diff --git a/tests/unit/tokens/test_localfs.py b/tests/unit/tokens/test_localfs.py index f950091252..b7d86d9f23 100644 --- 
a/tests/unit/tokens/test_localfs.py +++ b/tests/unit/tokens/test_localfs.py @@ -1,10 +1,14 @@ # -*- coding: utf-8 -*- +''' +Tests the localfs tokens interface. +''' from __future__ import absolute_import, print_function, unicode_literals import os -import salt.utils.files +import salt.exceptions import salt.tokens.localfs +import salt.utils.files from tests.support.unit import TestCase, skipIf from tests.support.helpers import with_tempdir @@ -51,3 +55,44 @@ class WriteTokenTest(TestCase): assert rename.called_with == [ ((temp_t_path, t_path), {}) ], rename.called_with + + +class TestLocalFS(TestCase): + def setUp(self): + # Default expected data + self.expected_data = {'this': 'is', 'some': 'token data'} + + @with_tempdir() + def test_get_token_should_return_token_if_exists(self, tempdir): + opts = {'token_dir': tempdir} + tok = salt.tokens.localfs.mk_token( + opts=opts, + tdata=self.expected_data, + )['token'] + actual_data = salt.tokens.localfs.get_token(opts=opts, tok=tok) + self.assertDictEqual(self.expected_data, actual_data) + + @with_tempdir() + def test_get_token_should_raise_SaltDeserializationError_if_token_file_is_empty(self, tempdir): + opts = {'token_dir': tempdir} + tok = salt.tokens.localfs.mk_token( + opts=opts, + tdata=self.expected_data, + )['token'] + with open(os.path.join(tempdir, tok), 'w') as f: + f.truncate() + with self.assertRaises(salt.exceptions.SaltDeserializationError) as e: + salt.tokens.localfs.get_token(opts=opts, tok=tok) + + @with_tempdir() + def test_get_token_should_raise_SaltDeserializationError_if_token_file_is_malformed(self, tempdir): + opts = {'token_dir': tempdir} + tok = salt.tokens.localfs.mk_token( + opts=opts, + tdata=self.expected_data, + )['token'] + with open(os.path.join(tempdir, tok), 'w') as f: + f.truncate() + f.write('this is not valid msgpack data') + with self.assertRaises(salt.exceptions.SaltDeserializationError) as e: + salt.tokens.localfs.get_token(opts=opts, tok=tok) -- 2.16.4 ++++++ do-not-break-repo-files-with-multiple-line-values-on.patch ++++++
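The token patches above reduce to a single predicate: a token is kept only if it deserialised cleanly and its 'expire' timestamp lies in the future; empty files, undeserialisable data, and missing or past expirations all lead to the file being removed. A simplified version of the validity check (the real code also distinguishes 'return empty' from 'delete'):

    import time

    def token_is_valid(tdata):
        # A missing 'expire' defaults to 0, which is always in the past,
        # so malformed tokens fall out of the same comparison.
        return bool(tdata) and tdata.get('expire', 0) >= time.time()

    assert token_is_valid({'expire': time.time() + 60})
    assert not token_is_valid({'expire': time.time() - 1})
    assert not token_is_valid({})   # no expiration date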
From b99e55aab52d086315d54cf44af68f40dcf79dc9 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Pablo=20Su=C3=A1rez=20Hern=C3=A1ndez?= <psuarezhernandez@suse.com> Date: Wed, 29 May 2019 11:03:16 +0100 Subject: [PATCH] Do not break repo files with multiple line values on yumpkg (bsc#1135360)
--- salt/modules/yumpkg.py | 16 ++++++--- tests/integration/modules/test_pkg.py | 48 +++++++++++++++++++++++++++ 2 files changed, 60 insertions(+), 4 deletions(-) diff --git a/salt/modules/yumpkg.py b/salt/modules/yumpkg.py index 5ec3835574..3a4fe47a45 100644 --- a/salt/modules/yumpkg.py +++ b/salt/modules/yumpkg.py @@ -2763,7 +2763,12 @@ def del_repo(repo, basedir=None, **kwargs): # pylint: disable=W0613 del filerepos[stanza]['comments'] content += '\n[{0}]'.format(stanza) for line in filerepos[stanza]: - content += '\n{0}={1}'.format(line, filerepos[stanza][line]) + # A whitespace is needed at the beginning of the new line in order + # to avoid breaking multiple line values allowed on repo files. + value = filerepos[stanza][line] + if isinstance(value, six.string_types) and '\n' in value: + value = '\n '.join(value.split('\n')) + content += '\n{0}={1}'.format(line, value) content += '\n{0}\n'.format(comments) with salt.utils.files.fopen(repofile, 'w') as fileout: @@ -2898,11 +2903,14 @@ def mod_repo(repo, basedir=None, **kwargs): ) content += '[{0}]\n'.format(stanza) for line in six.iterkeys(filerepos[stanza]): + # A whitespace is needed at the beginning of the new line in order + # to avoid breaking multiple line values allowed on repo files. + value = filerepos[stanza][line] + if isinstance(value, six.string_types) and '\n' in value: + value = '\n '.join(value.split('\n')) content += '{0}={1}\n'.format( line, - filerepos[stanza][line] - if not isinstance(filerepos[stanza][line], bool) - else _bool_to_str(filerepos[stanza][line]) + value if not isinstance(value, bool) else _bool_to_str(value) ) content += comments + '\n' diff --git a/tests/integration/modules/test_pkg.py b/tests/integration/modules/test_pkg.py index 0271cea81f..a82c9662c7 100644 --- a/tests/integration/modules/test_pkg.py +++ b/tests/integration/modules/test_pkg.py @@ -123,6 +123,54 @@ class PkgModuleTest(ModuleCase, SaltReturnAssertsMixin): if repo is not None: self.run_function('pkg.del_repo', [repo]) + def test_mod_del_repo_multiline_values(self): + ''' + test modifying and deleting a software repository defined with multiline values + ''' + os_grain = self.run_function('grains.item', ['os'])['os'] + repo = None + try: + if os_grain in ['CentOS', 'RedHat', 'SUSE']: + my_baseurl = 'http://my.fake.repo/foo/bar/\n http://my.fake.repo.alt/foo/bar/' + expected_get_repo_baseurl = 'http://my.fake.repo/foo/bar/\nhttp://my.fake.repo.alt/foo/bar/' + major_release = int( + self.run_function( + 'grains.item', + ['osmajorrelease'] + )['osmajorrelease'] + ) + repo = 'fakerepo' + name = 'Fake repo for RHEL/CentOS/SUSE' + baseurl = my_baseurl + gpgkey = 'https://my.fake.repo/foo/bar/MY-GPG-KEY.pub' + failovermethod = 'priority' + gpgcheck = 1 + enabled = 1 + ret = self.run_function( + 'pkg.mod_repo', + [repo], + name=name, + baseurl=baseurl, + gpgkey=gpgkey, + gpgcheck=gpgcheck, + enabled=enabled, + failovermethod=failovermethod, + ) + # return data from pkg.mod_repo contains the file modified at + # the top level, so use next(iter(ret)) to get that key + self.assertNotEqual(ret, {}) + repo_info = ret[next(iter(ret))] + self.assertIn(repo, repo_info) + self.assertEqual(repo_info[repo]['baseurl'], my_baseurl) + ret = self.run_function('pkg.get_repo', [repo]) + self.assertEqual(ret['baseurl'], expected_get_repo_baseurl) + self.run_function('pkg.mod_repo', [repo]) + ret = self.run_function('pkg.get_repo', [repo]) + self.assertEqual(ret['baseurl'], expected_get_repo_baseurl) + finally: + if repo is not None: +
self.run_function('pkg.del_repo', [repo]) + @requires_salt_modules('pkg.owner') def test_owner(self): ''' -- 2.21.0 ++++++ do-not-crash-when-there-are-ipv6-established-connect.patch ++++++
From f185eabfb4b529157cf7464b32beebeb8b944310 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Pablo=20Su=C3=A1rez=20Hern=C3=A1ndez?= <psuarezhernandez@suse.com> Date: Tue, 7 May 2019 15:33:51 +0100 Subject: [PATCH] Do not crash when there are IPv6 established connections (bsc#1130784)
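The crash came from splitting the address columns on their first colon; with bracketed IPv6 peers the host part itself contains colons. A minimal sketch of the corrected parsing over one made-up line of `ss` output:

    line = 'ESTAB 0 0 [::ffff:127.0.0.1]:41323 [::ffff:127.0.0.1]:4505'
    chunks = line.split()
    # rsplit(':', 1) splits on the *last* colon, the real host/port boundary
    local_host, local_port = chunks[3].rsplit(':', 1)
    remote_host, remote_port = chunks[4].rsplit(':', 1)
    print(remote_host.strip('[]'), remote_port)   # ::ffff:127.0.0.1 4505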
Add unit test for '_netlink_tool_remote_on' --- salt/utils/network.py | 9 +++++---- tests/unit/utils/test_network.py | 16 ++++++++++++++++ 2 files changed, 21 insertions(+), 4 deletions(-) diff --git a/salt/utils/network.py b/salt/utils/network.py index c72d2aec41..3f0522b9a5 100644 --- a/salt/utils/network.py +++ b/salt/utils/network.py @@ -1457,7 +1457,7 @@ def _parse_tcp_line(line): def _netlink_tool_remote_on(port, which_end): ''' - Returns set of ipv4 host addresses of remote established connections + Returns set of IPv4/IPv6 host addresses of remote established connections on local or remote tcp port. Parses output of shell 'ss' to get connections @@ -1467,6 +1467,7 @@ def _netlink_tool_remote_on(port, which_end): LISTEN 0 511 *:80 *:* LISTEN 0 128 *:22 *:* ESTAB 0 0 127.0.0.1:56726 127.0.0.1:4505 + ESTAB 0 0 [::ffff:127.0.0.1]:41323 [::ffff:127.0.0.1]:4505 ''' remotes = set() valid = False @@ -1486,14 +1487,14 @@ def _netlink_tool_remote_on(port, which_end): elif 'ESTAB' not in line: continue chunks = line.split() - local_host, local_port = chunks[3].split(':', 1) - remote_host, remote_port = chunks[4].split(':', 1) + local_host, local_port = chunks[3].rsplit(':', 1) + remote_host, remote_port = chunks[4].rsplit(':', 1) if which_end == 'remote_port' and int(remote_port) != port: continue if which_end == 'local_port' and int(local_port) != port: continue - remotes.add(remote_host) + remotes.add(remote_host.strip("[]")) if valid is False: remotes = None diff --git a/tests/unit/utils/test_network.py b/tests/unit/utils/test_network.py index ca627777a7..ecf7d7c45b 100644 --- a/tests/unit/utils/test_network.py +++ b/tests/unit/utils/test_network.py @@ -120,6 +120,14 @@ USER COMMAND PID FD PROTO LOCAL ADDRESS FOREIGN ADDRESS salt-master python2.781106 35 tcp4 127.0.0.1:61115 127.0.0.1:4506 ''' +LINUX_NETLINK_SS_OUTPUT = '''\ +State Recv-Q Send-Q Local Address:Port Peer Address:Port +TIME-WAIT 0 0 [::1]:8009 [::1]:40368 +LISTEN 0 128 127.0.0.1:5903 0.0.0.0:* +ESTAB 0 0 [::ffff:127.0.0.1]:4506 [::ffff:127.0.0.1]:32315 +ESTAB 0 0 192.168.122.1:4506 192.168.122.177:24545 +''' + IPV4_SUBNETS = {True: ('10.10.0.0/24',), False: ('10.10.0.0', '10.10.0.0/33', 'FOO', 9, '0.9.800.1000/24')} IPV6_SUBNETS = {True: ('::1/128',), @@ -453,6 +461,14 @@ class NetworkTestCase(TestCase): remotes = network._freebsd_remotes_on('4506', 'remote') self.assertEqual(remotes, set(['127.0.0.1'])) + def test_netlink_tool_remote_on(self): + with patch('salt.utils.platform.is_sunos', lambda: False): + with patch('salt.utils.platform.is_linux', lambda: True): + with patch('subprocess.check_output', + return_value=LINUX_NETLINK_SS_OUTPUT): + remotes = network._netlink_tool_remote_on('4506', 'local') + self.assertEqual(remotes, set(['192.168.122.177', '::ffff:127.0.0.1'])) + def test_generate_minion_id_distinct(self): ''' Test if minion IDs are distinct in the pool. -- 2.17.1 ++++++ do-not-load-pip-state-if-there-is-no-3rd-party-depen.patch ++++++
From ab7d69b3438c719f7ad6b4b346e56556e8a7bd10 Mon Sep 17 00:00:00 2001 From: Bo Maryniuk <bo@suse.de> Date: Fri, 21 Sep 2018 17:31:39 +0200 Subject: [PATCH] Do not load pip state if there is no 3rd party dependencies
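The pattern here is the usual guarded third-party import: keep the module importable and let __virtual__ report the missing dependency instead of crashing at load time. Condensed from the hunks below:

    try:
        import pkg_resources
    except ImportError:
        pkg_resources = None   # absent on minimal systems; handled below

    def __virtual__():
        # Refuse to load with a reason rather than raise ImportError.
        if pkg_resources is None:
            return False, 'Package dependency "pkg_resources" is missing'
        return 'pip'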
Safe import 3rd party dependency --- salt/modules/pip.py | 12 ++++++++++-- 1 file changed, 10 insertions(+), 2 deletions(-) diff --git a/salt/modules/pip.py b/salt/modules/pip.py index eac40c719c..988ae695a7 100644 --- a/salt/modules/pip.py +++ b/salt/modules/pip.py @@ -79,7 +79,10 @@ from __future__ import absolute_import, print_function, unicode_literals # Import python libs import logging import os -import pkg_resources +try: + import pkg_resources +except ImportError: + pkg_resources = None import re import shutil import sys @@ -116,7 +119,12 @@ def __virtual__(): entire filesystem. If it's not installed in a conventional location, the user is required to provide the location of pip each time it is used. ''' - return 'pip' + if pkg_resources is None: + ret = False, 'Package dependency "pkg_resource" is missing' + else: + ret = 'pip' + + return ret def _clear_context(bin_env=None): -- 2.17.1 ++++++ do-not-make-ansiblegate-to-crash-on-python3-minions.patch ++++++
From 189a19b6e8d28cc49e5ad5f2a683e1dfdce66a86 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Pablo=20Su=C3=A1rez=20Hern=C3=A1ndez?= <psuarezhernandez@suse.com> Date: Fri, 28 Jun 2019 15:17:56 +0100 Subject: [PATCH] Do not make ansiblegate to crash on Python3 minions
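Two Python 3 pitfalls are fixed here: the interpreter is named `python3` rather than `python`, and subprocess output arrives as bytes rather than str. A standalone sketch of both checks, outside of Salt:

    import sys

    python_exec = 'python3' if sys.version_info[0] == 3 else 'python'
    raw = b'{"completed": true}'   # subprocess stdout is bytes on Python 3
    out = raw.decode() if isinstance(raw, bytes) else raw
    print(python_exec, out)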
Fix pylint issues Move MockTimedProc implementation to tests.support.mock Add unit test for ansible caller --- salt/modules/ansiblegate.py | 14 +++++++-- tests/support/mock.py | 31 +++++++++++++++++++ tests/unit/modules/test_ansiblegate.py | 41 ++++++++++++++++++++++++++ tests/unit/modules/test_cmdmod.py | 34 +-------------------- 4 files changed, 84 insertions(+), 36 deletions(-) diff --git a/salt/modules/ansiblegate.py b/salt/modules/ansiblegate.py index 771db6d6aa..88e8147573 100644 --- a/salt/modules/ansiblegate.py +++ b/salt/modules/ansiblegate.py @@ -147,6 +147,10 @@ class AnsibleModuleCaller(object): :param kwargs: keywords to the module :return: ''' + if six.PY3: + python_exec = 'python3' + else: + python_exec = 'python' module = self._resolver.load_module(module) if not hasattr(module, 'main'): @@ -162,9 +166,13 @@ class AnsibleModuleCaller(object): ["echo", "{0}".format(js_args)], stdout=subprocess.PIPE, timeout=self.timeout) proc_out.run() + if six.PY3: + proc_out_stdout = proc_out.stdout.decode() + else: + proc_out_stdout = proc_out.stdout proc_exc = salt.utils.timed_subprocess.TimedProc( - ['python', module.__file__], - stdin=proc_out.stdout, stdout=subprocess.PIPE, timeout=self.timeout) + [python_exec, module.__file__], + stdin=proc_out_stdout, stdout=subprocess.PIPE, timeout=self.timeout) proc_exc.run() try: @@ -263,7 +271,7 @@ def help(module=None, *args): description = doc.get('description') or '' del doc['description'] ret['Description'] = description - ret['Available sections on module "{}"'.format(module.__name__.replace('ansible.modules.', ''))] = doc.keys() + ret['Available sections on module "{}"'.format(module.__name__.replace('ansible.modules.', ''))] = [i for i in doc.keys()] else: for arg in args: info = doc.get(arg) diff --git a/tests/support/mock.py b/tests/support/mock.py index 38b68bd5c4..4b44c112ee 100644 --- a/tests/support/mock.py +++ b/tests/support/mock.py @@ -510,6 +510,37 @@ class MockOpen(object): ret.extend(fh_.writelines_calls) return ret +class MockTimedProc(object): + ''' + Class used as a stand-in for salt.utils.timed_subprocess.TimedProc + ''' + class _Process(object): + ''' + Used to provide a dummy "process" attribute + ''' + def __init__(self, returncode=0, pid=12345): + self.returncode = returncode + self.pid = pid + + def __init__(self, stdout=None, stderr=None, returncode=0, pid=12345): + if stdout is not None and not isinstance(stdout, bytes): + raise TypeError('Must pass stdout to MockTimedProc as bytes') + if stderr is not None and not isinstance(stderr, bytes): + raise TypeError('Must pass stderr to MockTimedProc as bytes') + self._stdout = stdout + self._stderr = stderr + self.process = self._Process(returncode=returncode, pid=pid) + + def run(self): + pass + + @property + def stdout(self): + return self._stdout + + @property + def stderr(self): + return self._stderr # reimplement mock_open to support multiple filehandles mock_open = MockOpen diff --git a/tests/unit/modules/test_ansiblegate.py b/tests/unit/modules/test_ansiblegate.py index 1fbb083eb7..70b47f8bc2 100644 --- a/tests/unit/modules/test_ansiblegate.py +++ b/tests/unit/modules/test_ansiblegate.py @@ -29,6 +29,7 @@ from tests.support.unit import TestCase, skipIf from tests.support.mock import ( patch, MagicMock, + MockTimedProc, NO_MOCK, NO_MOCK_REASON ) @@ -36,6 +37,7 @@ from tests.support.mock import ( import salt.modules.ansiblegate as ansible import salt.utils.platform from salt.exceptions import LoaderError +from salt.ext import six @skipIf(NO_MOCK, NO_MOCK_REASON) @@ 
-137,3 +139,42 @@ description: ''' with patch('salt.modules.ansiblegate.ansible', None): assert ansible.__virtual__() == 'ansible' + + def test_ansible_module_call(self): + ''' + Test Ansible module call from ansible gate module + + :return: + ''' + + class Module(object): + ''' + An ansible module mock. + ''' + __name__ = 'one.two.three' + __file__ = 'foofile' + + def main(): + pass + + ANSIBLE_MODULE_ARGS = '{"ANSIBLE_MODULE_ARGS": ["arg_1", {"kwarg1": "foobar"}]}' + + proc = MagicMock(side_effect=[ + MockTimedProc( + stdout=ANSIBLE_MODULE_ARGS.encode(), + stderr=None), + MockTimedProc(stdout='{"completed": true}'.encode(), stderr=None) + ]) + + with patch.object(ansible, '_resolver', self.resolver), \ + patch.object(ansible._resolver, 'load_module', MagicMock(return_value=Module())): + _ansible_module_caller = ansible.AnsibleModuleCaller(ansible._resolver) + with patch('salt.utils.timed_subprocess.TimedProc', proc): + ret = _ansible_module_caller.call("one.two.three", "arg_1", kwarg1="foobar") + if six.PY3: + proc.assert_any_call(['echo', '{"ANSIBLE_MODULE_ARGS": {"kwarg1": "foobar", "_raw_params": "arg_1"}}'], stdout=-1, timeout=1200) + proc.assert_any_call(['python3', 'foofile'], stdin=ANSIBLE_MODULE_ARGS, stdout=-1, timeout=1200) + else: + proc.assert_any_call(['echo', '{"ANSIBLE_MODULE_ARGS": {"_raw_params": "arg_1", "kwarg1": "foobar"}}'], stdout=-1, timeout=1200) + proc.assert_any_call(['python', 'foofile'], stdin=ANSIBLE_MODULE_ARGS, stdout=-1, timeout=1200) + assert ret == {"completed": True, "timeout": 1200} diff --git a/tests/unit/modules/test_cmdmod.py b/tests/unit/modules/test_cmdmod.py index 8da672dd22..a20afaca0f 100644 --- a/tests/unit/modules/test_cmdmod.py +++ b/tests/unit/modules/test_cmdmod.py @@ -24,6 +24,7 @@ from tests.support.paths import FILES from tests.support.mock import ( mock_open, Mock, + MockTimedProc, MagicMock, NO_MOCK, NO_MOCK_REASON, @@ -36,39 +37,6 @@ MOCK_SHELL_FILE = '# List of acceptable shells\n' \ '/bin/bash\n' -class MockTimedProc(object): - ''' - Class used as a stand-in for salt.utils.timed_subprocess.TimedProc - ''' - class _Process(object): - ''' - Used to provide a dummy "process" attribute - ''' - def __init__(self, returncode=0, pid=12345): - self.returncode = returncode - self.pid = pid - - def __init__(self, stdout=None, stderr=None, returncode=0, pid=12345): - if stdout is not None and not isinstance(stdout, bytes): - raise TypeError('Must pass stdout to MockTimedProc as bytes') - if stderr is not None and not isinstance(stderr, bytes): - raise TypeError('Must pass stderr to MockTimedProc as bytes') - self._stdout = stdout - self._stderr = stderr - self.process = self._Process(returncode=returncode, pid=pid) - - def run(self): - pass - - @property - def stdout(self): - return self._stdout - - @property - def stderr(self): - return self._stderr - - @skipIf(NO_MOCK, NO_MOCK_REASON) class CMDMODTestCase(TestCase, LoaderModuleMockMixin): ''' -- 2.21.0 ++++++ do-not-report-patches-as-installed-when-not-all-the-.patch ++++++
From 769c9e85499bc9912b050fff7d3105690f1d7c7b Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Pablo=20Su=C3=A1rez=20Hern=C3=A1ndez?= <psuarezhernandez@suse.com> Date: Wed, 13 Mar 2019 16:14:07 +0000 Subject: [PATCH] Do not report patches as installed when not all the related packages are installed (bsc#1128061)
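The old code kept only the last package seen per advisory, so a patch could look installed as soon as any one of its packages was. A sketch of the corrected aggregation, fed with made-up parsed `yum updateinfo` rows:

    rows = [('i', 'SUSE-2019-123', 'vim'),       # installed
            (' ', 'SUSE-2019-123', 'vim-data')]  # not installed
    patches = {}
    for inst, advisory_id, pkg in rows:
        if advisory_id not in patches:
            patches[advisory_id] = {'installed': inst == 'i', 'summary': [pkg]}
        else:
            patches[advisory_id]['summary'].append(pkg)
            if inst != 'i':
                # one missing package makes the whole advisory not installed
                patches[advisory_id]['installed'] = False
    print(patches['SUSE-2019-123']['installed'])  # False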
Co-authored-by: Mihai Dinca <mdinca@suse.de> --- salt/modules/yumpkg.py | 18 ++++++++++++------ 1 file changed, 12 insertions(+), 6 deletions(-) diff --git a/salt/modules/yumpkg.py b/salt/modules/yumpkg.py index 4f26a41670..5ec3835574 100644 --- a/salt/modules/yumpkg.py +++ b/salt/modules/yumpkg.py @@ -3212,12 +3212,18 @@ def _get_patches(installed_only=False): for line in salt.utils.itertools.split(ret, os.linesep): inst, advisory_id, sev, pkg = re.match(r'([i|\s]) ([^\s]+) +([^\s]+) +([^\s]+)', line).groups() - if inst != 'i' and installed_only: - continue - patches[advisory_id] = { - 'installed': True if inst == 'i' else False, - 'summary': pkg - } + if not advisory_id in patches: + patches[advisory_id] = { + 'installed': True if inst == 'i' else False, + 'summary': [pkg] + } + else: + patches[advisory_id]['summary'].append(pkg) + if inst != 'i': + patches[advisory_id]['installed'] = False + + if installed_only: + patches = {k: v for k, v in patches.items() if v['installed']} return patches -- 2.20.1 ++++++ don-t-call-zypper-with-more-than-one-no-refresh.patch ++++++
From 5e0fe08c6afd75a7d65d6ccd6cf6b4b197fb1064 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?C=C3=A9dric=20Bosdonnat?= <cbosdonnat@suse.com> Date: Tue, 29 Jan 2019 09:44:03 +0100 Subject: [PATCH] Don't call zypper with more than one --no-refresh
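A sketch of the guard with a hypothetical helper (the real change lives inside the `_Zypper` command builder): the flag is appended only when the caller has not passed it already, so zypper never sees it twice:

    def build_cmd(args, refresh=False):
        cmd = ['zypper', '--non-interactive']
        if not refresh and '--no-refresh' not in args:
            cmd.append('--no-refresh')
        return cmd + list(args)

    print(build_cmd(['bar']))                  # flag added exactly once
    print(build_cmd(['--no-refresh', 'bar']))  # caller's flag kept, not doubled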
Newer zypper releases are picky and error out when --no-refresh is passed twice. Make sure we never hit this. --- salt/modules/zypperpkg.py | 2 +- tests/unit/modules/test_zypperpkg.py | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/salt/modules/zypperpkg.py b/salt/modules/zypperpkg.py index 92e7052020..7ac0df26c6 100644 --- a/salt/modules/zypperpkg.py +++ b/salt/modules/zypperpkg.py @@ -282,7 +282,7 @@ class _Zypper(object): self.__called = True if self.__xml: self.__cmd.append('--xmlout') - if not self.__refresh: + if not self.__refresh and '--no-refresh' not in args: self.__cmd.append('--no-refresh') self.__cmd.extend(args) diff --git a/tests/unit/modules/test_zypperpkg.py b/tests/unit/modules/test_zypperpkg.py index f586c23fd0..5c5091a570 100644 --- a/tests/unit/modules/test_zypperpkg.py +++ b/tests/unit/modules/test_zypperpkg.py @@ -138,7 +138,7 @@ class ZypperTestCase(TestCase, LoaderModuleMockMixin): self.assertEqual(zypper.__zypper__.call('foo'), stdout_xml_snippet) self.assertEqual(len(sniffer.calls), 1) - zypper.__zypper__.call('bar') + zypper.__zypper__.call('--no-refresh', 'bar') self.assertEqual(len(sniffer.calls), 2) self.assertEqual(sniffer.calls[0]['args'][0], ['zypper', '--non-interactive', '--no-refresh', 'foo']) self.assertEqual(sniffer.calls[1]['args'][0], ['zypper', '--non-interactive', '--no-refresh', 'bar']) -- 2.20.1 ++++++ early-feature-support-config.patch ++++++ ++++ 1996 lines (skipped) ++++++ enable-passing-a-unix_socket-for-mysql-returners-bsc.patch ++++++
From d937d1edb837bc084c1eaa320e8433382135e2d9 Mon Sep 17 00:00:00 2001 From: Maximilian Meister <mmeister@suse.de> Date: Thu, 3 May 2018 15:52:23 +0200 Subject: [PATCH] enable passing a unix_socket for mysql returners (bsc#1091371)
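With the option wired through, a returner configured with `mysql.unix_socket` simply forwards it to the driver's connect() call. A sketch of the resulting call, assuming PyMySQL as the driver and placeholder credentials:

    import pymysql  # assumption: PyMySQL; MySQLdb accepts the same keyword

    conn = pymysql.connect(host='localhost', user='salt', password='salt',
                           database='salt', port=3306,
                           unix_socket='/tmp/mysql.sock')  # the new option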
Quick fix for https://bugzilla.suse.com/show_bug.cgi?id=1091371. The upstream patch will go through some bigger refactoring of the mysql drivers to be cleaner, so this patch should only be temporary and can be dropped again once the refactor is done upstream. Signed-off-by: Maximilian Meister <mmeister@suse.de> --- salt/returners/mysql.py | 11 ++++++++--- 1 file changed, 8 insertions(+), 3 deletions(-) diff --git a/salt/returners/mysql.py b/salt/returners/mysql.py index 85892cb06c..a286731d5c 100644 --- a/salt/returners/mysql.py +++ b/salt/returners/mysql.py @@ -18,6 +18,7 @@ config. These are the defaults: mysql.pass: 'salt' mysql.db: 'salt' mysql.port: 3306 + mysql.unix_socket: '/tmp/mysql.sock' SSL is optional. The defaults are set to None. If you do not want to use SSL, either exclude these options or set them to None. @@ -43,6 +44,7 @@ optional. The following ssl options are simply for illustration purposes: alternative.mysql.ssl_ca: '/etc/pki/mysql/certs/localhost.pem' alternative.mysql.ssl_cert: '/etc/pki/mysql/certs/localhost.crt' alternative.mysql.ssl_key: '/etc/pki/mysql/certs/localhost.key' + alternative.mysql.unix_socket: '/tmp/mysql.sock' Should you wish the returner data to be cleaned out every so often, set `keep_jobs` to the number of hours for the jobs to live in the tables. @@ -198,7 +200,8 @@ def _get_options(ret=None): 'port': 3306, 'ssl_ca': None, 'ssl_cert': None, - 'ssl_key': None} + 'ssl_key': None, + 'unix_socket': '/tmp/mysql.sock'} attrs = {'host': 'host', 'user': 'user', @@ -207,7 +210,8 @@ def _get_options(ret=None): 'port': 'port', 'ssl_ca': 'ssl_ca', 'ssl_cert': 'ssl_cert', - 'ssl_key': 'ssl_key'} + 'ssl_key': 'ssl_key', + 'unix_socket': 'unix_socket'} _options = salt.returners.get_returner_options(__virtualname__, ret, @@ -261,7 +265,8 @@ def _get_serv(ret=None, commit=False): passwd=_options.get('pass'), db=_options.get('db'), port=_options.get('port'), - ssl=ssl_options) + ssl=ssl_options, + unix_socket=_options.get('unix_socket')) try: __context__['mysql_returner_conn'] = conn -- 2.13.7 ++++++ enable-passing-grains-to-start-event-based-on-start_.patch ++++++
From 0864a23ddef2a1b707c72373b998643a43bd710c Mon Sep 17 00:00:00 2001 From: Abid Mehmood <amehmood@suse.de> Date: Thu, 1 Aug 2019 13:14:22 +0200 Subject: [PATCH] enable passing grains to start event based on 'start_event_grains' configuration parameter
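The minion copies only the whitelisted grains into the start-event payload. A sketch of the filter with made-up option values:

    opts = {'start_event_grains': ['os', 'machine_id'],
            'grains': {'os': 'SLES', 'machine_id': 'abc123', 'fqdn': 'node1'}}
    load = {'id': 'minion1'}
    if opts['start_event_grains']:
        # keep only the grains the operator explicitly whitelisted
        load['grains'] = {k: v for k, v in opts['grains'].items()
                          if k in opts['start_event_grains']}
    print(load['grains'])   # {'os': 'SLES', 'machine_id': 'abc123'}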
unit tests --- conf/minion | 5 ++++ doc/ref/configuration/minion.rst | 15 +++++++++++ salt/config/__init__.py | 1 + salt/minion.py | 5 ++++ tests/unit/test_minion.py | 55 ++++++++++++++++++++++++++++++++++++++++ 5 files changed, 81 insertions(+) diff --git a/conf/minion b/conf/minion index f2b6655932..cc7e962120 100644 --- a/conf/minion +++ b/conf/minion @@ -548,6 +548,11 @@ # - edit.vim # - hyper # +# List of grains to pass in start event when minion starts up: +#start_event_grains: +# - machine_id +# - uuid +# # Top file to execute if startup_states is 'top': #top_file: '' diff --git a/doc/ref/configuration/minion.rst b/doc/ref/configuration/minion.rst index 4d02140f02..7dd84fb2aa 100644 --- a/doc/ref/configuration/minion.rst +++ b/doc/ref/configuration/minion.rst @@ -2000,6 +2000,21 @@ List of states to run when the minion starts up if ``startup_states`` is set to - edit.vim - hyper +.. conf_minion:: start_event_grains + +``start_event_grains`` +---------------------- + +Default: ``[]`` + +List of grains to pass in start event when minion starts up. + +.. code-block:: yaml + + start_event_grains: + - machine_id + - uuid + .. conf_minion:: top_file ``top_file`` diff --git a/salt/config/__init__.py b/salt/config/__init__.py index dc257ff8b8..6eaab1fdae 100644 --- a/salt/config/__init__.py +++ b/salt/config/__init__.py @@ -1282,6 +1282,7 @@ DEFAULT_MINION_OPTS = { 'state_top_saltenv': None, 'startup_states': '', 'sls_list': [], + 'start_event_grains': [], 'top_file': '', 'thoriumenv': None, 'thorium_top': 'top.sls', diff --git a/salt/minion.py b/salt/minion.py index 97f74bf47e..4c7ea0491c 100644 --- a/salt/minion.py +++ b/salt/minion.py @@ -1443,6 +1443,11 @@ class Minion(MinionBase): else: return + if self.opts['start_event_grains']: + grains_to_add = dict( + [(k, v) for k, v in six.iteritems(self.opts.get('grains', {})) if k in self.opts['start_event_grains']]) + load['grains'] = grains_to_add + if sync: try: self._send_req_sync(load, timeout) diff --git a/tests/unit/test_minion.py b/tests/unit/test_minion.py index c4cfff9b0b..7913b9cd01 100644 --- a/tests/unit/test_minion.py +++ b/tests/unit/test_minion.py @@ -282,6 +282,61 @@ class MinionTestCase(TestCase, AdaptedConfigurationTestCaseMixin): finally: minion.destroy() + def test_when_ping_interval_is_set_the_callback_should_be_added_to_periodic_callbacks(self): + with patch('salt.minion.Minion.ctx', MagicMock(return_value={})), \ + patch('salt.minion.Minion.sync_connect_master', MagicMock(side_effect=RuntimeError('stop execution'))), \ + patch('salt.utils.process.SignalHandlingMultiprocessingProcess.start', MagicMock(return_value=True)), \ + patch('salt.utils.process.SignalHandlingMultiprocessingProcess.join', MagicMock(return_value=True)): + mock_opts = self.get_config('minion', from_scratch=True) + mock_opts['ping_interval'] = 10 + io_loop = tornado.ioloop.IOLoop() + io_loop.make_current() + minion = salt.minion.Minion(mock_opts, io_loop=io_loop) + try: + try: + minion.connected = MagicMock(side_effect=(False, True)) + minion._fire_master_minion_start = MagicMock() + minion.tune_in(start=False) + except RuntimeError: + pass + + # Make sure the scheduler is initialized but the beacons are not + self.assertTrue('ping' in minion.periodic_callbacks) + finally: + minion.destroy() + + def test_when_passed_start_event_grains(self): + mock_opts = self.get_config('minion', from_scratch=True) + mock_opts['start_event_grains'] = ["os"] + io_loop = tornado.ioloop.IOLoop() + io_loop.make_current() + minion = salt.minion.Minion(mock_opts, 
io_loop=io_loop) + try: + minion.tok = MagicMock() + minion._send_req_sync = MagicMock() + minion._fire_master('Minion has started', 'minion_start') + load = minion._send_req_sync.call_args[0][0] + + self.assertTrue('grains' in load) + self.assertTrue('os' in load['grains']) + finally: + minion.destroy() + + def test_when_not_passed_start_event_grains(self): + mock_opts = self.get_config('minion', from_scratch=True) + io_loop = tornado.ioloop.IOLoop() + io_loop.make_current() + minion = salt.minion.Minion(mock_opts, io_loop=io_loop) + try: + minion.tok = MagicMock() + minion._send_req_sync = MagicMock() + minion._fire_master('Minion has started', 'minion_start') + load = minion._send_req_sync.call_args[0][0] + + self.assertTrue('grains' not in load) + finally: + minion.destroy() + def test_minion_retry_dns_count(self): ''' Tests that the resolve_dns will retry dns look ups for a maximum of -- 2.16.4 ++++++ fall-back-to-pymysql.patch ++++++
From d3b2f157643845d2659a226ba72ce24ce1d2a73d Mon Sep 17 00:00:00 2001 From: Maximilian Meister <mmeister@suse.de> Date: Thu, 5 Apr 2018 13:23:23 +0200 Subject: [PATCH] fall back to PyMySQL
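The fallback relies on PyMySQL being able to masquerade as MySQLdb; the subtlety this patch fixes is that OperationalError then lives in a different submodule. The whole pattern, condensed (the install_as_MySQLdb() call is what the surrounding module already performs):

    try:
        import MySQLdb
        from MySQLdb.connections import OperationalError
    except ImportError:
        try:
            import pymysql
            pymysql.install_as_MySQLdb()   # PyMySQL poses as MySQLdb
            import MySQLdb
            from MySQLdb.err import OperationalError
        except ImportError:
            MySQLdb = None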
same is already done in modules (see #26803) Signed-off-by: Maximilian Meister <mmeister@suse.de> --- salt/modules/mysql.py | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/salt/modules/mysql.py b/salt/modules/mysql.py index de8916f4f2..64c773f40a 100644 --- a/salt/modules/mysql.py +++ b/salt/modules/mysql.py @@ -58,7 +58,7 @@ try: import MySQLdb.cursors import MySQLdb.converters from MySQLdb.constants import FIELD_TYPE, FLAG - from MySQLdb import OperationalError + from MySQLdb.connections import OperationalError except ImportError: try: # MySQLdb import failed, try to import PyMySQL @@ -68,7 +68,7 @@ except ImportError: import MySQLdb.cursors import MySQLdb.converters from MySQLdb.constants import FIELD_TYPE, FLAG - from MySQLdb import OperationalError + from MySQLdb.err import OperationalError except ImportError: MySQLdb = None -- 2.17.1 ++++++ fix-a-wrong-rebase-in-test_core.py-180.patch ++++++
From 329f90fcde205237545cd623f55f0f6c228bf893 Mon Sep 17 00:00:00 2001 From: Alberto Planas <aplanas@gmail.com> Date: Fri, 25 Oct 2019 15:43:16 +0200 Subject: [PATCH] Fix a wrong rebase in test_core.py (#180)
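The functional part of the rebase fix is trading a silent `pass` for a debug log when a firmware ships an unreadable product_name. A sketch of that behavior as a standalone function:

    import logging
    log = logging.getLogger(__name__)

    def read_product_name(path='/sys/class/dmi/id/product_name'):
        try:
            with open(path, 'rb') as handle:
                return handle.read().decode().strip()
        except UnicodeDecodeError:
            # Some firmwares write invalid bytes here; skip, but leave a trace.
            log.debug('The content in %s is not valid', path)
        except (IOError, OSError):
            pass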
* core: ignore wrong product_name files Some firmwares (like some NUC machines), do not provide valid /sys/class/dmi/id/product_name strings. In those cases an UnicodeDecodeError exception happens. This patch ignore this kind of issue during the grains creation. (cherry picked from commit 27b001bd5408359aa5dd219bfd900095ed592fe8) * core: remove duplicate dead code (cherry picked from commit bd0213bae00b737b24795bec3c030ebfe476e0d8) --- salt/grains/core.py | 4 +-- tests/unit/grains/test_core.py | 45 ---------------------------------- 2 files changed, 2 insertions(+), 47 deletions(-) diff --git a/salt/grains/core.py b/salt/grains/core.py index fdabe484a8..bf54c54553 100644 --- a/salt/grains/core.py +++ b/salt/grains/core.py @@ -989,7 +989,7 @@ def _virtual(osdata): except UnicodeDecodeError: # Some firmwares provide non-valid 'product_name' # files, ignore them - pass + log.debug('The content in /sys/devices/virtual/dmi/id/product_name is not valid') except IOError: pass elif osdata['kernel'] == 'FreeBSD': @@ -2490,7 +2490,7 @@ def _hw_data(osdata): except UnicodeDecodeError: # Some firmwares provide non-valid 'product_name' # files, ignore them - pass + log.debug('The content in /sys/devices/virtual/dmi/id/product_name is not valid') except (IOError, OSError) as err: # PermissionError is new to Python 3, but corresponds to the EACESS and # EPERM error numbers. Use those instead here for PY2 compatibility. diff --git a/tests/unit/grains/test_core.py b/tests/unit/grains/test_core.py index aa04a7a7ac..889fb90074 100644 --- a/tests/unit/grains/test_core.py +++ b/tests/unit/grains/test_core.py @@ -1117,51 +1117,6 @@ class CoreGrainsTestCase(TestCase, LoaderModuleMockMixin): 'uuid': '' }) - @skipIf(not salt.utils.platform.is_linux(), 'System is not Linux') - def test_kernelparams_return(self): - expectations = [ - ('BOOT_IMAGE=/vmlinuz-3.10.0-693.2.2.el7.x86_64', - {'kernelparams': [('BOOT_IMAGE', '/vmlinuz-3.10.0-693.2.2.el7.x86_64')]}), - ('root=/dev/mapper/centos_daemon-root', - {'kernelparams': [('root', '/dev/mapper/centos_daemon-root')]}), - ('rhgb quiet ro', - {'kernelparams': [('rhgb', None), ('quiet', None), ('ro', None)]}), - ('param="value1"', - {'kernelparams': [('param', 'value1')]}), - ('param="value1 value2 value3"', - {'kernelparams': [('param', 'value1 value2 value3')]}), - ('param="value1 value2 value3" LANG="pl" ro', - {'kernelparams': [('param', 'value1 value2 value3'), ('LANG', 'pl'), ('ro', None)]}), - ('ipv6.disable=1', - {'kernelparams': [('ipv6.disable', '1')]}), - ('param="value1:value2:value3"', - {'kernelparams': [('param', 'value1:value2:value3')]}), - ('param="value1,value2,value3"', - {'kernelparams': [('param', 'value1,value2,value3')]}), - ('param="value1" param="value2" param="value3"', - {'kernelparams': [('param', 'value1'), ('param', 'value2'), ('param', 'value3')]}), - ] - - for cmdline, expectation in expectations: - with patch('salt.utils.files.fopen', mock_open(read_data=cmdline)): - self.assertEqual(core.kernelparams(), expectation) - - @skipIf(not salt.utils.platform.is_linux(), 'System is not Linux') - @patch('os.path.exists') - @patch('salt.utils.platform.is_proxy') - def test__hw_data_linux_empty(self, is_proxy, exists): - is_proxy.return_value = False - exists.return_value = True - with patch('salt.utils.files.fopen', mock_open(read_data='')): - self.assertEqual(core._hw_data({'kernel': 'Linux'}), { - 'biosreleasedate': '', - 'biosversion': '', - 'manufacturer': '', - 'productname': '', - 'serialnumber': '', - 'uuid': '' - }) - @skipIf(not 
salt.utils.platform.is_linux(), 'System is not Linux') @skipIf(six.PY2, 'UnicodeDecodeError is throw in Python 3') @patch('os.path.exists') -- 2.23.0 ++++++ fix-applying-of-attributes-for-returner-rawfile_json.patch ++++++
From f02df4c2e53a356608025f19ff981ad4455ead12 Mon Sep 17 00:00:00 2001 From: rbthomp <26642445+rbthomp@users.noreply.github.com> Date: Tue, 16 Oct 2018 15:04:56 -0600 Subject: [PATCH] Fix applying of attributes for returner rawfile_json
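Why passing `ret` matters: the options lookup can honour per-job settings carried in the return payload, so hard-coding an empty dict always falls back to the default filename. A toy stand-in for the lookup (the real code calls salt.returners.get_returner_options(); the profile handling here is simplified and hypothetical):

    def _get_options(ret):
        options = {'filename': '/var/log/salt/events'}
        if ret and ret.get('ret_config') == 'alternative':
            # hypothetical alternative profile with its own filename
            options['filename'] = '/var/log/salt/events-alt'
        return options

    print(_get_options({})['filename'])                             # default
    print(_get_options({'ret_config': 'alternative'})['filename'])  # overridden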
Arguments are not getting applied to the rawfile_json returner. For example, if you specify an alternate filename for the output, the default "/var/log/salt/events" is always used. Passing the `ret` to `_get_options(ret)` resolves this. --- salt/returners/rawfile_json.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/salt/returners/rawfile_json.py b/salt/returners/rawfile_json.py index d010164360..cf55840a87 100644 --- a/salt/returners/rawfile_json.py +++ b/salt/returners/rawfile_json.py @@ -55,7 +55,7 @@ def returner(ret): ''' Write the return data to a file on the minion. ''' - opts = _get_options({}) # Pass in empty ret, since this is a list of events + opts = _get_options(ret) try: with salt.utils.files.flopen(opts['filename'], 'a') as logfile: salt.utils.json.dump(ret, logfile) -- 2.23.0 ++++++ fix-aptpkg-systemd-call-bsc-1143301.patch ++++++
From f667d6f0534498e2aaa6e46242727bafc13241fd Mon Sep 17 00:00:00 2001 From: Mihai Dinca <mdinca@suse.de> Date: Wed, 31 Jul 2019 15:29:03 +0200 Subject: [PATCH] Fix aptpkg systemd call (bsc#1143301)
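The change only affects the generated command line: the transient scope now carries a description naming the calling module, which makes leftover scopes attributable in systemd output. A sketch of the resulting argv:

    mod_name = 'salt.modules.aptpkg'   # what __name__ resolves to in the module
    cmd = ['systemd-run', '--scope', '--description "{0}"'.format(mod_name)]
    cmd.extend(['apt-get', 'purge', 'vim'])
    print(cmd)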
--- salt/modules/aptpkg.py | 2 +- tests/unit/modules/test_aptpkg.py | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/salt/modules/aptpkg.py b/salt/modules/aptpkg.py index e537f5b007..b7c1a342ef 100644 --- a/salt/modules/aptpkg.py +++ b/salt/modules/aptpkg.py @@ -165,7 +165,7 @@ def _call_apt(args, scope=True, **kwargs): ''' cmd = [] if scope and salt.utils.systemd.has_scope(__context__) and __salt__['config.get']('systemd.scope', True): - cmd.extend(['systemd-run', '--scope']) + cmd.extend(['systemd-run', '--scope', '--description "{0}"'.format(__name__)]) cmd.extend(args) params = {'output_loglevel': 'trace', diff --git a/tests/unit/modules/test_aptpkg.py b/tests/unit/modules/test_aptpkg.py index 580b840197..06f3a9f6aa 100644 --- a/tests/unit/modules/test_aptpkg.py +++ b/tests/unit/modules/test_aptpkg.py @@ -544,7 +544,7 @@ class AptUtilsTestCase(TestCase, LoaderModuleMockMixin): with patch.dict(aptpkg.__salt__, {'cmd.run_all': MagicMock(), 'config.get': MagicMock(return_value=True)}): aptpkg._call_apt(['apt-get', 'purge', 'vim']) # pylint: disable=W0106 aptpkg.__salt__['cmd.run_all'].assert_called_once_with( - ['systemd-run', '--scope', 'apt-get', 'purge', 'vim'], env={}, + ['systemd-run', '--scope', '--description "salt.modules.aptpkg"', 'apt-get', 'purge', 'vim'], env={}, output_loglevel='trace', python_shell=False) def test_call_apt_with_kwargs(self): -- 2.22.0 ++++++ fix-async-batch-multiple-done-events.patch ++++++
From 2dcee9c2773f588cc5ca040b1d22c1e8036dcbf7 Mon Sep 17 00:00:00 2001 From: Mihai Dinca <mdinca@suse.de> Date: Tue, 7 May 2019 12:24:35 +0200 Subject: [PATCH] Fix async-batch multiple done events
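The duplicate events came from every minion return scheduling another batch run. The patch splits scheduling from running and debounces with a `scheduled` flag; reduced to its core (hypothetical class, a tornado-style io_loop assumed):

    class Debounced(object):
        def __init__(self, io_loop, delay):
            self.io_loop = io_loop
            self.delay = delay
            self.scheduled = False

        def schedule_next(self):
            if not self.scheduled:      # absorb bursts of minion returns
                self.scheduled = True
                self.io_loop.call_later(self.delay, self.run_next)

        def run_next(self):
            self.scheduled = False      # re-arm for the following batch
            # ... dispatch the next batch of minions here ...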
--- salt/cli/batch_async.py | 17 ++++++++++++----- tests/unit/cli/test_batch_async.py | 20 +++++++++++++------- 2 files changed, 25 insertions(+), 12 deletions(-) diff --git a/salt/cli/batch_async.py b/salt/cli/batch_async.py index 9c20b2fc6e..8c8f481e34 100644 --- a/salt/cli/batch_async.py +++ b/salt/cli/batch_async.py @@ -84,6 +84,7 @@ class BatchAsync(object): listen=True, io_loop=ioloop, keep_loop=True) + self.scheduled = False def __set_event_handler(self): ping_return_pattern = 'salt/job/{0}/ret/*'.format(self.ping_jid) @@ -116,8 +117,7 @@ class BatchAsync(object): if minion in self.active: self.active.remove(minion) self.done_minions.add(minion) - # call later so that we maybe gather more returns - self.event.io_loop.call_later(self.batch_delay, self.schedule_next) + self.schedule_next() def _get_next(self): to_run = self.minions.difference( @@ -137,7 +137,7 @@ class BatchAsync(object): self.active = self.active.difference(self.timedout_minions) running = batch_minions.difference(self.done_minions).difference(self.timedout_minions) if timedout_minions: - self.event.io_loop.call_later(self.batch_delay, self.schedule_next) + self.schedule_next() if running: self.event.io_loop.add_callback(self.find_job, running) @@ -189,7 +189,7 @@ class BatchAsync(object): "metadata": self.metadata } self.event.fire_event(data, "salt/batch/{0}/start".format(self.batch_jid)) - yield self.schedule_next() + yield self.run_next() def end_batch(self): left = self.minions.symmetric_difference(self.done_minions.union(self.timedout_minions)) @@ -204,8 +204,14 @@ class BatchAsync(object): self.event.fire_event(data, "salt/batch/{0}/done".format(self.batch_jid)) self.event.remove_event_handler(self.__event_handler) - @tornado.gen.coroutine def schedule_next(self): + if not self.scheduled: + self.scheduled = True + # call later so that we maybe gather more returns + self.event.io_loop.call_later(self.batch_delay, self.run_next) + + @tornado.gen.coroutine + def run_next(self): next_batch = self._get_next() if next_batch: self.active = self.active.union(next_batch) @@ -225,3 +231,4 @@ class BatchAsync(object): self.active = self.active.difference(next_batch) else: self.end_batch() + self.scheduled = False diff --git a/tests/unit/cli/test_batch_async.py b/tests/unit/cli/test_batch_async.py index d519157d92..441f9c58b9 100644 --- a/tests/unit/cli/test_batch_async.py +++ b/tests/unit/cli/test_batch_async.py @@ -111,14 +111,14 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase): @tornado.testing.gen_test def test_start_batch_calls_next(self): - self.batch.schedule_next = MagicMock(return_value=MagicMock()) + self.batch.run_next = MagicMock(return_value=MagicMock()) self.batch.event = MagicMock() future = tornado.gen.Future() future.set_result(None) - self.batch.schedule_next = MagicMock(return_value=future) + self.batch.run_next = MagicMock(return_value=future) self.batch.start_batch() self.assertEqual(self.batch.initialized, True) - self.assertEqual(len(self.batch.schedule_next.mock_calls), 1) + self.assertEqual(len(self.batch.run_next.mock_calls), 1) def test_batch_fire_done_event(self): self.batch.targeted_minions = {'foo', 'baz', 'bar'} @@ -154,7 +154,7 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase): future = tornado.gen.Future() future.set_result({'minions': ['foo', 'bar']}) self.batch.local.run_job_async.return_value = future - ret = self.batch.schedule_next().result() + ret = self.batch.run_next().result() self.assertEqual( self.batch.local.run_job_async.call_args[0], ({'foo', 'bar'}, 'my.fun', [], 
'list') @@ -253,7 +253,7 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase): self.assertEqual(self.batch.done_minions, {'foo'}) self.assertEqual( self.batch.event.io_loop.call_later.call_args[0], - (self.batch.batch_delay, self.batch.schedule_next)) + (self.batch.batch_delay, self.batch.run_next)) def test_batch__event_handler_find_job_return(self): self.batch.event = MagicMock( @@ -263,10 +263,10 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase): self.assertEqual(self.batch.find_job_returned, {'foo'}) @tornado.testing.gen_test - def test_batch_schedule_next_end_batch_when_no_next(self): + def test_batch_run_next_end_batch_when_no_next(self): self.batch.end_batch = MagicMock() self.batch._get_next = MagicMock(return_value={}) - self.batch.schedule_next() + self.batch.run_next() self.assertEqual(len(self.batch.end_batch.mock_calls), 1) @tornado.testing.gen_test @@ -342,3 +342,9 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase): self.batch.event.io_loop.add_callback.call_args[0], (self.batch.find_job, {'foo'}) ) + + def test_only_on_run_next_is_scheduled(self): + self.batch.event = MagicMock() + self.batch.scheduled = True + self.batch.schedule_next() + self.assertEqual(len(self.batch.event.io_loop.call_later.mock_calls), 0) -- 2.21.0 ++++++ fix-async-batch-race-conditions.patch ++++++
From 33c5e10c2912f584243d29c764c2c6cca86edf4a Mon Sep 17 00:00:00 2001 From: Mihai Dinca <mdinca@suse.de> Date: Thu, 11 Apr 2019 15:57:59 +0200 Subject: [PATCH] Fix async batch race conditions
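The new termination check closes the batch only when every minion that answered the presence ping is accounted for as either done or timed out. In set terms:

    minions = {'foo', 'bar', 'baz'}      # answered the presence ping
    done_minions = {'foo', 'bar'}
    timedout_minions = {'baz'}
    left = minions.symmetric_difference(done_minions.union(timedout_minions))
    if not left:
        print('fire salt/batch/<jid>/done and remove the event handler')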
Close batching when there is no next batch --- salt/cli/batch_async.py | 80 +++++++++++++++--------------- tests/unit/cli/test_batch_async.py | 35 ++++++------- 2 files changed, 54 insertions(+), 61 deletions(-) diff --git a/salt/cli/batch_async.py b/salt/cli/batch_async.py index 3160d46d8b..9c20b2fc6e 100644 --- a/salt/cli/batch_async.py +++ b/salt/cli/batch_async.py @@ -37,14 +37,14 @@ class BatchAsync(object): - tag: salt/batch/<batch-jid>/start - data: { "available_minions": self.minions, - "down_minions": self.down_minions + "down_minions": targeted_minions - presence_ping_minions } When the batch ends, an `done` event is fired: - tag: salt/batch/<batch-jid>/done - data: { "available_minions": self.minions, - "down_minions": self.down_minions, + "down_minions": targeted_minions - presence_ping_minions "done_minions": self.done_minions, "timedout_minions": self.timedout_minions } @@ -67,7 +67,7 @@ class BatchAsync(object): self.eauth = batch_get_eauth(clear_load['kwargs']) self.metadata = clear_load['kwargs'].get('metadata', {}) self.minions = set() - self.down_minions = set() + self.targeted_minions = set() self.timedout_minions = set() self.done_minions = set() self.active = set() @@ -108,8 +108,7 @@ class BatchAsync(object): minion = data['id'] if op == 'ping_return': self.minions.add(minion) - self.down_minions.remove(minion) - if not self.down_minions: + if self.targeted_minions == self.minions: self.event.io_loop.spawn_callback(self.start_batch) elif op == 'find_job_return': self.find_job_returned.add(minion) @@ -120,9 +119,6 @@ class BatchAsync(object): # call later so that we maybe gather more returns self.event.io_loop.call_later(self.batch_delay, self.schedule_next) - if self.initialized and self.done_minions == self.minions.difference(self.timedout_minions): - self.end_batch() - def _get_next(self): to_run = self.minions.difference( self.done_minions).difference( @@ -135,16 +131,13 @@ class BatchAsync(object): return set(list(to_run)[:next_batch_size]) @tornado.gen.coroutine - def check_find_job(self, minions): - did_not_return = minions.difference(self.find_job_returned) - if did_not_return: - for minion in did_not_return: - if minion in self.find_job_returned: - self.find_job_returned.remove(minion) - if minion in self.active: - self.active.remove(minion) - self.timedout_minions.add(minion) - running = minions.difference(did_not_return).difference(self.done_minions).difference(self.timedout_minions) + def check_find_job(self, batch_minions): + timedout_minions = batch_minions.difference(self.find_job_returned).difference(self.done_minions) + self.timedout_minions = self.timedout_minions.union(timedout_minions) + self.active = self.active.difference(self.timedout_minions) + running = batch_minions.difference(self.done_minions).difference(self.timedout_minions) + if timedout_minions: + self.event.io_loop.call_later(self.batch_delay, self.schedule_next) if running: self.event.io_loop.add_callback(self.find_job, running) @@ -183,7 +176,7 @@ class BatchAsync(object): jid=self.ping_jid, metadata=self.metadata, **self.eauth) - self.down_minions = set(ping_return['minions']) + self.targeted_minions = set(ping_return['minions']) @tornado.gen.coroutine def start_batch(self): @@ -192,36 +185,43 @@ class BatchAsync(object): self.initialized = True data = { "available_minions": self.minions, - "down_minions": self.down_minions, + "down_minions": self.targeted_minions.difference(self.minions), "metadata": self.metadata } self.event.fire_event(data, 
"salt/batch/{0}/start".format(self.batch_jid)) yield self.schedule_next() def end_batch(self): - data = { - "available_minions": self.minions, - "down_minions": self.down_minions, - "done_minions": self.done_minions, - "timedout_minions": self.timedout_minions, - "metadata": self.metadata - } - self.event.fire_event(data, "salt/batch/{0}/done".format(self.batch_jid)) - self.event.remove_event_handler(self.__event_handler) + left = self.minions.symmetric_difference(self.done_minions.union(self.timedout_minions)) + if not left: + data = { + "available_minions": self.minions, + "down_minions": self.targeted_minions.difference(self.minions), + "done_minions": self.done_minions, + "timedout_minions": self.timedout_minions, + "metadata": self.metadata + } + self.event.fire_event(data, "salt/batch/{0}/done".format(self.batch_jid)) + self.event.remove_event_handler(self.__event_handler) @tornado.gen.coroutine def schedule_next(self): next_batch = self._get_next() if next_batch: - yield self.local.run_job_async( - next_batch, - self.opts['fun'], - self.opts['arg'], - 'list', - raw=self.opts.get('raw', False), - ret=self.opts.get('return', ''), - gather_job_timeout=self.opts['gather_job_timeout'], - jid=self.batch_jid, - metadata=self.metadata) - self.event.io_loop.call_later(self.opts['timeout'], self.find_job, set(next_batch)) self.active = self.active.union(next_batch) + try: + yield self.local.run_job_async( + next_batch, + self.opts['fun'], + self.opts['arg'], + 'list', + raw=self.opts.get('raw', False), + ret=self.opts.get('return', ''), + gather_job_timeout=self.opts['gather_job_timeout'], + jid=self.batch_jid, + metadata=self.metadata) + self.event.io_loop.call_later(self.opts['timeout'], self.find_job, set(next_batch)) + except Exception as ex: + self.active = self.active.difference(next_batch) + else: + self.end_batch() diff --git a/tests/unit/cli/test_batch_async.py b/tests/unit/cli/test_batch_async.py index f65b6a06c3..d519157d92 100644 --- a/tests/unit/cli/test_batch_async.py +++ b/tests/unit/cli/test_batch_async.py @@ -75,8 +75,8 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase): self.batch.local.run_job_async.call_args[0], ('*', 'test.ping', [], 'glob') ) - # assert down_minions == all minions matched by tgt - self.assertEqual(self.batch.down_minions, set(['foo', 'bar'])) + # assert targeted_minions == all minions matched by tgt + self.assertEqual(self.batch.targeted_minions, set(['foo', 'bar'])) @tornado.testing.gen_test def test_batch_start_on_gather_job_timeout(self): @@ -121,7 +121,10 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase): self.assertEqual(len(self.batch.schedule_next.mock_calls), 1) def test_batch_fire_done_event(self): + self.batch.targeted_minions = {'foo', 'baz', 'bar'} self.batch.minions = set(['foo', 'bar']) + self.batch.done_minions = {'foo'} + self.batch.timedout_minions = {'bar'} self.batch.event = MagicMock() self.batch.metadata = {'mykey': 'myvalue'} self.batch.end_batch() @@ -130,9 +133,9 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase): ( { 'available_minions': set(['foo', 'bar']), - 'done_minions': set(), - 'down_minions': set(), - 'timedout_minions': set(), + 'done_minions': self.batch.done_minions, + 'down_minions': {'baz'}, + 'timedout_minions': self.batch.timedout_minions, 'metadata': self.batch.metadata }, "salt/batch/1235/done" @@ -212,7 +215,7 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase): self.assertEqual(self.batch._get_next(), set()) def test_batch__event_handler_ping_return(self): - self.batch.down_minions = {'foo'} + 
self.batch.targeted_minions = {'foo'} self.batch.event = MagicMock( unpack=MagicMock(return_value=('salt/job/1234/ret/foo', {'id': 'foo'}))) self.batch.start() @@ -222,7 +225,7 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase): self.assertEqual(self.batch.done_minions, set()) def test_batch__event_handler_call_start_batch_when_all_pings_return(self): - self.batch.down_minions = {'foo'} + self.batch.targeted_minions = {'foo'} self.batch.event = MagicMock( unpack=MagicMock(return_value=('salt/job/1234/ret/foo', {'id': 'foo'}))) self.batch.start() @@ -232,7 +235,7 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase): (self.batch.start_batch,)) def test_batch__event_handler_not_call_start_batch_when_not_all_pings_return(self): - self.batch.down_minions = {'foo', 'bar'} + self.batch.targeted_minions = {'foo', 'bar'} self.batch.event = MagicMock( unpack=MagicMock(return_value=('salt/job/1234/ret/foo', {'id': 'foo'}))) self.batch.start() @@ -260,20 +263,10 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase): self.assertEqual(self.batch.find_job_returned, {'foo'}) @tornado.testing.gen_test - def test_batch__event_handler_end_batch(self): - self.batch.event = MagicMock( - unpack=MagicMock(return_value=('salt/job/not-my-jid/ret/foo', {'id': 'foo'}))) - future = tornado.gen.Future() - future.set_result({'minions': ['foo', 'bar', 'baz']}) - self.batch.local.run_job_async.return_value = future - self.batch.start() - self.batch.initialized = True - self.assertEqual(self.batch.down_minions, {'foo', 'bar', 'baz'}) + def test_batch_schedule_next_end_batch_when_no_next(self): self.batch.end_batch = MagicMock() - self.batch.minions = {'foo', 'bar', 'baz'} - self.batch.done_minions = {'foo', 'bar'} - self.batch.timedout_minions = {'baz'} - self.batch._BatchAsync__event_handler(MagicMock()) + self.batch._get_next = MagicMock(return_value={}) + self.batch.schedule_next() self.assertEqual(len(self.batch.end_batch.mock_calls), 1) @tornado.testing.gen_test -- 2.20.1 ++++++ fix-batch_async-obsolete-test.patch ++++++
From e2950f4178f466e64ed5d3e748db001a73ab4b2a Mon Sep 17 00:00:00 2001 From: Mihai Dinca <mdinca@suse.de> Date: Tue, 3 Dec 2019 11:22:42 +0100 Subject: [PATCH] Fix batch_async obsolete test
--- tests/unit/cli/test_batch_async.py | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/tests/unit/cli/test_batch_async.py b/tests/unit/cli/test_batch_async.py index 12dfe543bc..f1d36a81fb 100644 --- a/tests/unit/cli/test_batch_async.py +++ b/tests/unit/cli/test_batch_async.py @@ -140,8 +140,14 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase): "salt/batch/1235/done" ) ) + + def test_batch__del__(self): + batch = BatchAsync(MagicMock(), MagicMock(), MagicMock()) + event = MagicMock() + batch.event = event + batch.__del__() self.assertEqual( - len(self.batch.event.remove_event_handler.mock_calls), 1) + len(event.remove_event_handler.mock_calls), 1) @tornado.testing.gen_test def test_batch_next(self): -- 2.23.0 ++++++ fix-bsc-1065792.patch ++++++
From 30a4053231cf67f486ca1f430dce563f7247d963 Mon Sep 17 00:00:00 2001 From: Bo Maryniuk <bo@suse.de> Date: Thu, 14 Dec 2017 16:21:40 +0100 Subject: [PATCH] Fix bsc#1065792
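A sketch of the race being closed, with a hypothetical loader stand-in: probing the function dict before the lazy loader has populated it makes __virtual__ return False on a box that does have a service provider, so the state forces a full load first:

    class LazyDict(dict):
        def _load_all(self):   # stand-in for salt's LazyLoader method
            self.update({'service.start': lambda name: True})

    __salt__ = LazyDict()
    __salt__._load_all()                 # force full population up front
    print('service.start' in __salt__)   # True only after loading everything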
--- salt/states/service.py | 1 + 1 file changed, 1 insertion(+) diff --git a/salt/states/service.py b/salt/states/service.py index c5bf3f2d54..a5ec426ec4 100644 --- a/salt/states/service.py +++ b/salt/states/service.py @@ -80,6 +80,7 @@ def __virtual__(): Only make these states available if a service provider has been detected or assigned for this minion ''' + __salt__._load_all() if 'service.start' in __salt__: return __virtualname__ else: -- 2.13.7 ++++++ fix-failing-unit-tests-for-batch-async.patch ++++++
From 8378bb24a5a53973e8dba7658b8b3465d967329f Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Pablo=20Su=C3=A1rez=20Hern=C3=A1ndez?= <psuarezhernandez@suse.com> Date: Fri, 4 Oct 2019 15:00:50 +0100 Subject: [PATCH] Fix failing unit tests for batch async
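Most assertions move from call_later() (delayed, with a timeout argument) to spawn_callback() (queued for the next loop iteration). A tiny demonstration of the primitive the tests now expect, assuming tornado is installed:

    import tornado.ioloop

    def find_job(minions):
        print('find_job for', minions)

    io_loop = tornado.ioloop.IOLoop.current()
    io_loop.spawn_callback(find_job, {'foo', 'bar'})  # runs on next iteration
    io_loop.call_later(0.1, io_loop.stop)             # stop the demo loop
    io_loop.start()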
--- salt/cli/batch_async.py | 2 +- tests/unit/cli/test_batch_async.py | 57 +++++++++++++++++------------- 2 files changed, 34 insertions(+), 25 deletions(-) diff --git a/salt/cli/batch_async.py b/salt/cli/batch_async.py index f9e736f804..6d0dca1da5 100644 --- a/salt/cli/batch_async.py +++ b/salt/cli/batch_async.py @@ -88,7 +88,7 @@ class BatchAsync(object): io_loop=ioloop, keep_loop=True) self.scheduled = False - self.patterns = {} + self.patterns = set() def __set_event_handler(self): ping_return_pattern = 'salt/job/{0}/ret/*'.format(self.ping_jid) diff --git a/tests/unit/cli/test_batch_async.py b/tests/unit/cli/test_batch_async.py index 441f9c58b9..12dfe543bc 100644 --- a/tests/unit/cli/test_batch_async.py +++ b/tests/unit/cli/test_batch_async.py @@ -68,8 +68,8 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase): ret = self.batch.start() # assert start_batch is called later with batch_presence_ping_timeout as param self.assertEqual( - self.batch.event.io_loop.call_later.call_args[0], - (self.batch.batch_presence_ping_timeout, self.batch.start_batch)) + self.batch.event.io_loop.spawn_callback.call_args[0], + (self.batch.start_batch,)) # assert test.ping called self.assertEqual( self.batch.local.run_job_async.call_args[0], @@ -88,8 +88,8 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase): ret = self.batch.start() # assert start_batch is called later with gather_job_timeout as param self.assertEqual( - self.batch.event.io_loop.call_later.call_args[0], - (self.batch.opts['gather_job_timeout'], self.batch.start_batch)) + self.batch.event.io_loop.spawn_callback.call_args[0], + (self.batch.start_batch,)) def test_batch_fire_start_event(self): self.batch.minions = set(['foo', 'bar']) @@ -113,12 +113,11 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase): def test_start_batch_calls_next(self): self.batch.run_next = MagicMock(return_value=MagicMock()) self.batch.event = MagicMock() - future = tornado.gen.Future() - future.set_result(None) - self.batch.run_next = MagicMock(return_value=future) self.batch.start_batch() self.assertEqual(self.batch.initialized, True) - self.assertEqual(len(self.batch.run_next.mock_calls), 1) + self.assertEqual( + self.batch.event.io_loop.spawn_callback.call_args[0], + (self.batch.run_next,)) def test_batch_fire_done_event(self): self.batch.targeted_minions = {'foo', 'baz', 'bar'} @@ -154,14 +153,14 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase): future = tornado.gen.Future() future.set_result({'minions': ['foo', 'bar']}) self.batch.local.run_job_async.return_value = future - ret = self.batch.run_next().result() + self.batch.run_next() self.assertEqual( self.batch.local.run_job_async.call_args[0], ({'foo', 'bar'}, 'my.fun', [], 'list') ) self.assertEqual( - self.batch.event.io_loop.call_later.call_args[0], - (self.batch.opts['timeout'], self.batch.find_job, {'foo', 'bar'}) + self.batch.event.io_loop.spawn_callback.call_args[0], + (self.batch.find_job, {'foo', 'bar'}) ) self.assertEqual(self.batch.active, {'bar', 'foo'}) @@ -252,13 +251,14 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase): self.assertEqual(self.batch.active, set()) self.assertEqual(self.batch.done_minions, {'foo'}) self.assertEqual( - self.batch.event.io_loop.call_later.call_args[0], - (self.batch.batch_delay, self.batch.run_next)) + self.batch.event.io_loop.spawn_callback.call_args[0], + (self.batch.schedule_next,)) def test_batch__event_handler_find_job_return(self): self.batch.event = MagicMock( - unpack=MagicMock(return_value=('salt/job/1236/ret/foo', {'id': 'foo'}))) + 
unpack=MagicMock(return_value=('salt/job/1236/ret/foo', {'id': 'foo', 'return': 'deadbeaf'}))) self.batch.start() + self.batch.patterns.add(('salt/job/1236/ret/*', 'find_job_return')) self.batch._BatchAsync__event_handler(MagicMock()) self.assertEqual(self.batch.find_job_returned, {'foo'}) @@ -275,10 +275,13 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase): future = tornado.gen.Future() future.set_result({}) self.batch.local.run_job_async.return_value = future + self.batch.minions = set(['foo', 'bar']) + self.batch.jid_gen = MagicMock(return_value="1234") + tornado.gen.sleep = MagicMock(return_value=future) self.batch.find_job({'foo', 'bar'}) self.assertEqual( - self.batch.event.io_loop.call_later.call_args[0], - (self.batch.opts['gather_job_timeout'], self.batch.check_find_job, {'foo', 'bar'}) + self.batch.event.io_loop.spawn_callback.call_args[0], + (self.batch.check_find_job, {'foo', 'bar'}, "1234") ) @tornado.testing.gen_test @@ -288,17 +291,21 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase): future = tornado.gen.Future() future.set_result({}) self.batch.local.run_job_async.return_value = future + self.batch.minions = set(['foo', 'bar']) + self.batch.jid_gen = MagicMock(return_value="1234") + tornado.gen.sleep = MagicMock(return_value=future) self.batch.find_job({'foo', 'bar'}) self.assertEqual( - self.batch.event.io_loop.call_later.call_args[0], - (self.batch.opts['gather_job_timeout'], self.batch.check_find_job, {'foo'}) + self.batch.event.io_loop.spawn_callback.call_args[0], + (self.batch.check_find_job, {'foo'}, "1234") ) def test_batch_check_find_job_did_not_return(self): self.batch.event = MagicMock() self.batch.active = {'foo'} self.batch.find_job_returned = set() - self.batch.check_find_job({'foo'}) + self.batch.patterns = { ('salt/job/1234/ret/*', 'find_job_return') } + self.batch.check_find_job({'foo'}, jid="1234") self.assertEqual(self.batch.find_job_returned, set()) self.assertEqual(self.batch.active, set()) self.assertEqual(len(self.batch.event.io_loop.add_callback.mock_calls), 0) @@ -306,9 +313,10 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase): def test_batch_check_find_job_did_return(self): self.batch.event = MagicMock() self.batch.find_job_returned = {'foo'} - self.batch.check_find_job({'foo'}) + self.batch.patterns = { ('salt/job/1234/ret/*', 'find_job_return') } + self.batch.check_find_job({'foo'}, jid="1234") self.assertEqual( - self.batch.event.io_loop.add_callback.call_args[0], + self.batch.event.io_loop.spawn_callback.call_args[0], (self.batch.find_job, {'foo'}) ) @@ -329,7 +337,8 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase): # both not yet done but only 'foo' responded to find_job not_done = {'foo', 'bar'} - self.batch.check_find_job(not_done) + self.batch.patterns = { ('salt/job/1234/ret/*', 'find_job_return') } + self.batch.check_find_job(not_done, jid="1234") # assert 'bar' removed from active self.assertEqual(self.batch.active, {'foo'}) @@ -339,7 +348,7 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase): # assert 'find_job' schedueled again only for 'foo' self.assertEqual( - self.batch.event.io_loop.add_callback.call_args[0], + self.batch.event.io_loop.spawn_callback.call_args[0], (self.batch.find_job, {'foo'}) ) @@ -347,4 +356,4 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase): self.batch.event = MagicMock() self.batch.scheduled = True self.batch.schedule_next() - self.assertEqual(len(self.batch.event.io_loop.call_later.mock_calls), 0) + self.assertEqual(len(self.batch.event.io_loop.spawn_callback.mock_calls), 0) -- 2.22.0 ++++++ 
fix-for-log-checking-in-x509-test.patch ++++++
From 0c5498a9b8f0917740c17d456416de3597dc1fab Mon Sep 17 00:00:00 2001 From: Jochen Breuer <jbreuer@suse.de> Date: Thu, 28 Nov 2019 15:23:36 +0100 Subject: [PATCH] Fix for log checking in x509 test
We are logging in debug and not in trace mode here. --- tests/unit/modules/test_x509.py | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/tests/unit/modules/test_x509.py b/tests/unit/modules/test_x509.py index 7030f96484..143abe82bf 100644 --- a/tests/unit/modules/test_x509.py +++ b/tests/unit/modules/test_x509.py @@ -71,9 +71,9 @@ class X509TestCase(TestCase, LoaderModuleMockMixin): subj = FakeSubject() x509._parse_subject(subj) - assert x509.log.trace.call_args[0][0] == "Missing attribute '%s'. Error: %s" - assert x509.log.trace.call_args[0][1] == list(subj.nid.keys())[0] - assert isinstance(x509.log.trace.call_args[0][2], TypeError) + assert x509.log.debug.call_args[0][0] == "Missing attribute '%s'. Error: %s" + assert x509.log.debug.call_args[0][1] == list(subj.nid.keys())[0] + assert isinstance(x509.log.debug.call_args[0][2], TypeError) @skipIf(not HAS_M2CRYPTO, 'Skipping, M2Crypto is unavailble') def test_get_pem_entry(self): -- 2.23.0 ++++++ fix-for-older-mock-module.patch ++++++
From 7e4c53ab89927b6b700603a74131da318c93b957 Mon Sep 17 00:00:00 2001 From: Jochen Breuer <jbreuer@suse.de> Date: Fri, 25 Oct 2019 16:18:58 +0200 Subject: [PATCH] Fix for older mock module
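The portable spelling is to index the call_args tuple: `call_args.args` only exists on newer mock releases, while `call_args[0]` works everywhere. A quick illustration with the stdlib mock (Salt's bundled mock behaves the same way):

    from unittest import mock

    m = mock.MagicMock()
    m('apt-get', '--download-only')
    positional = m.call_args[0]   # tuple of positional args, any mock version
    print(any('--download-only' in arg for arg in positional))   # True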
The `args` attribute of `call_args` is not available in older mock modules, so index `call_args[0]` instead. --- tests/unit/modules/test_aptpkg.py | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/tests/unit/modules/test_aptpkg.py b/tests/unit/modules/test_aptpkg.py index d3fac5902a..bc6b610d86 100644 --- a/tests/unit/modules/test_aptpkg.py +++ b/tests/unit/modules/test_aptpkg.py @@ -412,15 +412,15 @@ class AptPkgTestCase(TestCase, LoaderModuleMockMixin): } with patch.multiple(aptpkg, **patch_kwargs): aptpkg.upgrade() - args_matching = [True for args in patch_kwargs['__salt__']['cmd.run_all'].call_args.args if "--download-only" in args] + args_matching = [True for args in patch_kwargs['__salt__']['cmd.run_all'].call_args[0] if "--download-only" in args] self.assertFalse(any(args_matching)) aptpkg.upgrade(downloadonly=True) - args_matching = [True for args in patch_kwargs['__salt__']['cmd.run_all'].call_args.args if "--download-only" in args] + args_matching = [True for args in patch_kwargs['__salt__']['cmd.run_all'].call_args[0] if "--download-only" in args] self.assertTrue(any(args_matching)) aptpkg.upgrade(download_only=True) - args_matching = [True for args in patch_kwargs['__salt__']['cmd.run_all'].call_args.args if "--download-only" in args] + args_matching = [True for args in patch_kwargs['__salt__']['cmd.run_all'].call_args[0] if "--download-only" in args] self.assertTrue(any(args_matching)) def test_show(self): -- 2.16.4 ++++++ fix-for-suse-expanded-support-detection.patch ++++++
From 616750ad4b2b2b8d55d19b81500dbd4f0aba1f74 Mon Sep 17 00:00:00 2001 From: Jochen Breuer <jbreuer@suse.de> Date: Thu, 6 Sep 2018 17:15:18 +0200 Subject: [PATCH] Fix for SUSE Expanded Support detection
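The detection order is the whole fix: a box with both release files is SUSE Expanded Support and must be reported as RedHat, not CentOS. A condensed sketch of the check:

    import os

    def lsb_distrib_id():
        distrib = 'CentOS'   # what /etc/centos-release alone would imply
        if os.path.isfile('/etc/redhat-release'):
            with open('/etc/redhat-release') as ifile:
                for line in ifile:
                    if 'red hat enterprise linux server' in line.lower():
                        distrib = 'RedHat'   # SUSE Expanded Support box
                        break
        return distrib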
A SUSE ES installation has both the centos-release and the redhat-release file. Since os_data only used the centos-release file to detect a CentOS installation, this led to SUSE ES being detected as CentOS. This change also adds a check for redhat-release and then marks the 'lsb_distrib_id' as RedHat. --- salt/grains/core.py | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/salt/grains/core.py b/salt/grains/core.py index f0f1bd17c4..b2c1d475b0 100644 --- a/salt/grains/core.py +++ b/salt/grains/core.py @@ -1821,6 +1821,15 @@ def os_data(): log.trace('Parsing distrib info from /etc/centos-release') # CentOS Linux grains['lsb_distrib_id'] = 'CentOS' + # Maybe CentOS Linux; could also be SUSE Expanded Support. + # SUSE ES has both, centos-release and redhat-release. + if os.path.isfile('/etc/redhat-release'): + with salt.utils.files.fopen('/etc/redhat-release') as ifile: + for line in ifile: + if "red hat enterprise linux server" in line.lower(): + # This is a SUSE Expanded Support Rhel installation + grains['lsb_distrib_id'] = 'RedHat' + break with salt.utils.files.fopen('/etc/centos-release') as ifile: for line in ifile: # Need to pull out the version and codename -- 2.17.1 ++++++ fix-git_pillar-merging-across-multiple-__env__-repos.patch ++++++
From 6747243babde058762428f9bdb0e3ef16402eadd Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Pablo=20Su=C3=A1rez=20Hern=C3=A1ndez?= <psuarezhernandez@suse.com> Date: Tue, 6 Nov 2018 16:38:54 +0000 Subject: [PATCH] Fix git_pillar merging across multiple __env__ repositories (bsc#1112874)
Resolve target branch when using __env__ Test git ext_pillar across multiple repos using __env__ Remove unicode references --- tests/integration/pillar/test_git_pillar.py | 45 +++++++++++++++++++++ 1 file changed, 45 insertions(+) diff --git a/tests/integration/pillar/test_git_pillar.py b/tests/integration/pillar/test_git_pillar.py index 5d9a374f6e..4a9553d1a1 100644 --- a/tests/integration/pillar/test_git_pillar.py +++ b/tests/integration/pillar/test_git_pillar.py @@ -1361,6 +1361,51 @@ class TestPygit2SSH(GitPillarSSHTestBase): 'nested_dict': {'master': True}}} ) + +@skipIf(NO_MOCK, NO_MOCK_REASON) +@skipIf(_windows_or_mac(), 'minion is windows or mac') +@skip_if_not_root +@skipIf(not HAS_PYGIT2, 'pygit2 >= {0} and libgit2 >= {1} required'.format(PYGIT2_MINVER, LIBGIT2_MINVER)) +@skipIf(not HAS_NGINX, 'nginx not present') +@skipIf(not HAS_VIRTUALENV, 'virtualenv not present') +class TestPygit2HTTP(GitPillarHTTPTestBase): + ''' + Test git_pillar with pygit2 using SSH authentication + ''' + def test_single_source(self): + ''' + Test with git_pillar_includes enabled and using "__env__" as the branch + name for the configured repositories. + The "gitinfo" repository contains top.sls file with a local reference + and also referencing external "nowhere.foo" which is provided by "webinfo" + repository mounted as "nowhere". + ''' + ret = self.get_pillar('''\ + file_ignore_regex: [] + file_ignore_glob: [] + git_pillar_provider: pygit2 + git_pillar_pubkey: {pubkey_nopass} + git_pillar_privkey: {privkey_nopass} + cachedir: {cachedir} + extension_modules: {extmods} + ext_pillar: + - git: + - __env__ {url_extra_repo}: + - name: gitinfo + - __env__ {url}: + - name: webinfo + - mountpoint: nowhere + ''') + self.assertEqual( + ret, + {'branch': 'master', + 'motd': 'The force will be with you. Always.', + 'mylist': ['master'], + 'mydict': {'master': True, + 'nested_list': ['master'], + 'nested_dict': {'master': True}}} + ) + @requires_system_grains def test_root_parameter(self, grains): ''' -- 2.17.1 ++++++ fix-ipv6-scope-bsc-1108557.patch ++++++
From b6d47a2ca7f1bed902dfc6574e6fe91d3034aa29 Mon Sep 17 00:00:00 2001 From: Bo Maryniuk <bo@suse.de> Date: Fri, 28 Sep 2018 15:22:33 +0200 Subject: [PATCH] Fix IPv6 scope (bsc#1108557)
Fix ipaddress imports Remove unused import Fix ipaddress import Fix unicode imports in compat Override standard IPv6Address class Check version via object Isolate Py2 and Py3 mode Add logging Add debugging to the ip_address method (py2 and py3) Remove multiple returns and add check for address syntax Remove unnecessary variable for import detection Remove duplicated code Remove unnecessary operator Remove multiple returns Use ternary operator instead Remove duplicated code Move docstrings to their native places Add real exception message Add logging to the ip_interface Add scope on str Lintfix: mute not called constructors Add extra detection for hexadecimal packed bytes on Python2. This cannot be detected with type comparison, because bytes == str and at the same time bytes != str if compatibility is not around Fix py2 case where the same class cannot initialise itself on Python2 via super. Simplify checking clause Do not use introspection for method swap Fix wrong type swap Add Py3.4 old implementation's fix Lintfix Lintfix refactor: remove duplicate returns as not needed Revert method remapping with pylint updates Remove unnecessary manipulation with IPv6 scope outside of the IPv6Address object instance Lintfix: W0611 Reverse skipping tests: if no ipaddress --- salt/_compat.py | 74 +++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 74 insertions(+) diff --git a/salt/_compat.py b/salt/_compat.py index c10b82c0c2..8628833dcf 100644 --- a/salt/_compat.py +++ b/salt/_compat.py @@ -229,7 +229,81 @@ class IPv6InterfaceScoped(ipaddress.IPv6Interface, IPv6AddressScoped): self.hostmask = self.network.hostmask +def ip_address(address): + """Take an IP string/int and return an object of the correct type. + + Args: + address: A string or integer, the IP address. Either IPv4 or + IPv6 addresses may be supplied; integers less than 2**32 will + be considered to be IPv4 by default. + + Returns: + An IPv4Address or IPv6Address object. + + Raises: + ValueError: if the *address* passed isn't either a v4 or a v6 + address + + """ + try: + return ipaddress.IPv4Address(address) + except (ipaddress.AddressValueError, ipaddress.NetmaskValueError) as err: + log.debug('Error while parsing IPv4 address: %s', address) + log.debug(err) + + try: + return IPv6AddressScoped(address) + except (ipaddress.AddressValueError, ipaddress.NetmaskValueError) as err: + log.debug('Error while parsing IPv6 address: %s', address) + log.debug(err) + + if isinstance(address, bytes): + raise ipaddress.AddressValueError('{} does not appear to be an IPv4 or IPv6 address. ' + 'Did you pass in a bytes (str in Python 2) instead ' + 'of a unicode object?'.format(repr(address))) + + raise ValueError('{} does not appear to be an IPv4 or IPv6 address'.format(repr(address))) + + +def ip_interface(address): + """Take an IP string/int and return an object of the correct type. + + Args: + address: A string or integer, the IP address. Either IPv4 or + IPv6 addresses may be supplied; integers less than 2**32 will + be considered to be IPv4 by default. + + Returns: + An IPv4Interface or IPv6Interface object. + + Raises: + ValueError: if the string passed isn't either a v4 or a v6 + address. + + Notes: + The IPv?Interface classes describe an Address on a particular + Network, so they're basically a combination of both the Address + and Network classes. 
+ + """ + try: + return ipaddress.IPv4Interface(address) + except (ipaddress.AddressValueError, ipaddress.NetmaskValueError) as err: + log.debug('Error while getting IPv4 interface for address %s', address) + log.debug(err) + + try: + return ipaddress.IPv6Interface(address) + except (ipaddress.AddressValueError, ipaddress.NetmaskValueError) as err: + log.debug('Error while getting IPv6 interface for address %s', address) + log.debug(err) + + raise ValueError('{} does not appear to be an IPv4 or IPv6 interface'.format(address)) + + if ipaddress: ipaddress.IPv6Address = IPv6AddressScoped if sys.version_info.major == 2: ipaddress.IPv6Interface = IPv6InterfaceScoped + ipaddress.ip_address = ip_address + ipaddress.ip_interface = ip_interface -- 2.20.1 ++++++ fix-issue-2068-test.patch ++++++
From 3be2bb0043f15af468f1db33b1aa1cc6f2e5797d Mon Sep 17 00:00:00 2001 From: Bo Maryniuk <bo@suse.de> Date: Wed, 9 Jan 2019 16:08:19 +0100 Subject: [PATCH] Fix issue #2068 test
Skip injecting `__call__` if chunk is not a dict. This also fixes `integration/modules/test_state.py:StateModuleTest.test_exclude`, which tests `include` and `exclude` state directives containing only a list of strings. Minor update: more correct is-dict check. --- salt/state.py | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-) diff --git a/salt/state.py b/salt/state.py index 91985c8edc..01ec1faf8b 100644 --- a/salt/state.py +++ b/salt/state.py @@ -25,6 +25,7 @@ import traceback import re import time import random +import collections # Import salt libs import salt.loader @@ -2776,16 +2777,18 @@ class State(object): ''' for chunk in high: state = high[chunk] + if not isinstance(state, collections.Mapping): + continue for state_ref in state: needs_default = True + if not isinstance(state[state_ref], list): + continue for argset in state[state_ref]: if isinstance(argset, six.string_types): needs_default = False break if needs_default: - order = state[state_ref].pop(-1) - state[state_ref].append('__call__') - state[state_ref].append(order) + state[state_ref].insert(-1, '__call__') def call_high(self, high, orchestration_jid=None): ''' -- 2.20.1 ++++++ fix-memory-leak-produced-by-batch-async-find_jobs-me.patch ++++++
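The one-liner in the fix-issue-2068-test patch above leans on list.insert with a negative index; a worked example (with illustrative chunk data) of why it is equivalent to the old pop/append sequence:

    # Old sequence: pop the trailing order marker, append '__call__',
    # then put the marker back.
    chunk = ['file.managed', {'order': 10000}]
    order = chunk.pop(-1)
    chunk.append('__call__')
    chunk.append(order)

    # New one-liner: insert '__call__' just before the trailing marker.
    chunk2 = ['file.managed', {'order': 10000}]
    chunk2.insert(-1, '__call__')

    assert chunk == chunk2 == ['file.managed', '__call__', {'order': 10000}]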
From 8941de5a64b6330c6a814059e6e337f7ad3aa6cd Mon Sep 17 00:00:00 2001 From: Mihai Dinca <mdinca@suse.de> Date: Mon, 16 Sep 2019 11:27:30 +0200 Subject: [PATCH] Fix memory leak produced by batch async find_jobs mechanism (bsc#1140912) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit
Multiple fixes: - use different JIDs per find_job - fix bug in detection of find_job returns - fix timeout passed from request payload - better cleanup at the end of batching Co-authored-by: Pablo Suárez Hernández <psuarezhernandez@suse.com> --- salt/cli/batch_async.py | 60 +++++++++++++++++++++++++++-------------- salt/client/__init__.py | 1 + salt/master.py | 1 - 3 files changed, 41 insertions(+), 21 deletions(-) diff --git a/salt/cli/batch_async.py b/salt/cli/batch_async.py index 8c8f481e34..8a67331102 100644 --- a/salt/cli/batch_async.py +++ b/salt/cli/batch_async.py @@ -72,6 +72,7 @@ class BatchAsync(object): self.done_minions = set() self.active = set() self.initialized = False + self.jid_gen = jid_gen self.ping_jid = jid_gen() self.batch_jid = jid_gen() self.find_job_jid = jid_gen() @@ -89,14 +90,11 @@ class BatchAsync(object): def __set_event_handler(self): ping_return_pattern = 'salt/job/{0}/ret/*'.format(self.ping_jid) batch_return_pattern = 'salt/job/{0}/ret/*'.format(self.batch_jid) - find_job_return_pattern = 'salt/job/{0}/ret/*'.format(self.find_job_jid) self.event.subscribe(ping_return_pattern, match_type='glob') self.event.subscribe(batch_return_pattern, match_type='glob') - self.event.subscribe(find_job_return_pattern, match_type='glob') - self.event.patterns = { + self.patterns = { (ping_return_pattern, 'ping_return'), (batch_return_pattern, 'batch_run'), - (find_job_return_pattern, 'find_job_return') } self.event.set_event_handler(self.__event_handler) @@ -104,7 +102,7 @@ class BatchAsync(object): if not self.event: return mtag, data = self.event.unpack(raw, self.event.serial) - for (pattern, op) in self.event.patterns: + for (pattern, op) in self.patterns: if fnmatch.fnmatch(mtag, pattern): minion = data['id'] if op == 'ping_return': @@ -112,7 +110,8 @@ class BatchAsync(object): if self.targeted_minions == self.minions: self.event.io_loop.spawn_callback(self.start_batch) elif op == 'find_job_return': - self.find_job_returned.add(minion) + if data.get("return", None): + self.find_job_returned.add(minion) elif op == 'batch_run': if minion in self.active: self.active.remove(minion) @@ -131,31 +130,46 @@ class BatchAsync(object): return set(list(to_run)[:next_batch_size]) @tornado.gen.coroutine - def check_find_job(self, batch_minions): + def check_find_job(self, batch_minions, jid): + find_job_return_pattern = 'salt/job/{0}/ret/*'.format(jid) + self.event.unsubscribe(find_job_return_pattern, match_type='glob') + self.patterns.remove((find_job_return_pattern, "find_job_return")) + timedout_minions = batch_minions.difference(self.find_job_returned).difference(self.done_minions) self.timedout_minions = self.timedout_minions.union(timedout_minions) self.active = self.active.difference(self.timedout_minions) running = batch_minions.difference(self.done_minions).difference(self.timedout_minions) + if timedout_minions: self.schedule_next() + if running: + self.find_job_returned = self.find_job_returned.difference(running) self.event.io_loop.add_callback(self.find_job, running) @tornado.gen.coroutine def find_job(self, minions): - not_done = minions.difference(self.done_minions) - ping_return = yield self.local.run_job_async( - not_done, - 'saltutil.find_job', - [self.batch_jid], - 'list', - gather_job_timeout=self.opts['gather_job_timeout'], - jid=self.find_job_jid, - **self.eauth) - self.event.io_loop.call_later( - self.opts['gather_job_timeout'], - self.check_find_job, - not_done) + not_done = minions.difference(self.done_minions).difference(self.timedout_minions) + + if 
not_done: + jid = self.jid_gen() + find_job_return_pattern = 'salt/job/{0}/ret/*'.format(jid) + self.patterns.add((find_job_return_pattern, "find_job_return")) + self.event.subscribe(find_job_return_pattern, match_type='glob') + + ret = yield self.local.run_job_async( + not_done, + 'saltutil.find_job', + [self.batch_jid], + 'list', + gather_job_timeout=self.opts['gather_job_timeout'], + jid=jid, + **self.eauth) + self.event.io_loop.call_later( + self.opts['gather_job_timeout'], + self.check_find_job, + not_done, + jid) @tornado.gen.coroutine def start(self): @@ -203,6 +217,9 @@ class BatchAsync(object): } self.event.fire_event(data, "salt/batch/{0}/done".format(self.batch_jid)) self.event.remove_event_handler(self.__event_handler) + for (pattern, label) in self.patterns: + if label in ["ping_return", "batch_run"]: + self.event.unsubscribe(pattern, match_type='glob') def schedule_next(self): if not self.scheduled: @@ -226,9 +243,12 @@ class BatchAsync(object): gather_job_timeout=self.opts['gather_job_timeout'], jid=self.batch_jid, metadata=self.metadata) + self.event.io_loop.call_later(self.opts['timeout'], self.find_job, set(next_batch)) except Exception as ex: + log.error("Error in scheduling next batch: %s", ex) self.active = self.active.difference(next_batch) else: self.end_batch() self.scheduled = False + yield diff --git a/salt/client/__init__.py b/salt/client/__init__.py index aff354a021..0bb6d2b111 100644 --- a/salt/client/__init__.py +++ b/salt/client/__init__.py @@ -1624,6 +1624,7 @@ class LocalClient(object): 'key': self.key, 'tgt_type': tgt_type, 'ret': ret, + 'timeout': timeout, 'jid': jid} # if kwargs are passed, pack them. diff --git a/salt/master.py b/salt/master.py index f08c126280..0e4bba0505 100644 --- a/salt/master.py +++ b/salt/master.py @@ -2043,7 +2043,6 @@ class ClearFuncs(object): def publish_batch(self, clear_load, minions, missing): batch_load = {} batch_load.update(clear_load) - import salt.cli.batch_async batch = salt.cli.batch_async.BatchAsync( self.local.opts, functools.partial(self._prep_jid, clear_load, {}), -- 2.23.0 ++++++ fix-schedule.run_job-port-upstream-pr-54799-194.patch ++++++
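The heart of the fix-memory-leak-produced-by-batch-async-find_jobs-me patch above is that every find_job round now gets a fresh JID whose return pattern is subscribed just before the job and unsubscribed again in check_find_job. A toy model of that lifecycle, with a plain set standing in for salt's event bus (all names here are illustrative):

    class ToyEventBus(object):
        # Only the pattern bookkeeping is modeled.
        def __init__(self):
            self.subscriptions = set()

        def subscribe(self, pattern):
            self.subscriptions.add(pattern)

        def unsubscribe(self, pattern):
            self.subscriptions.discard(pattern)

    bus = ToyEventBus()
    for jid in ('20190916112730000001', '20190916112730000002'):
        pattern = 'salt/job/{0}/ret/*'.format(jid)
        bus.subscribe(pattern)    # before run_job_async fires saltutil.find_job
        # ... returns are collected until gather_job_timeout ...
        bus.unsubscribe(pattern)  # first thing in check_find_job

    assert not bus.subscriptions  # nothing accumulates across rounds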
From d39d0014cc50d7a9a5a24ec5e8ec2bd04609dbdd Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Mihai=20Dinc=C4=83?= <dincamihai@users.noreply.github.com> Date: Tue, 17 Dec 2019 12:19:45 +0100 Subject: [PATCH] Fix schedule.run_job - Port upstream PR#54799 (#194)
If a scheduled job does not contain a time element parameter, then running that job with schedule.run_job fails with a traceback because data['run'] does not exist. Fixing lint. Fixing test_run_job test to ensure the right data is being asserted. Updating unit/test_module_names.py to include integration.scheduler.test_run_job. Removing extra, unnecessary code. --- salt/utils/schedule.py | 11 ++-- tests/integration/scheduler/test_run_job.py | 73 +++++++++++++++++++++ tests/unit/modules/test_schedule.py | 8 +-- tests/unit/test_module_names.py | 1 + 4 files changed, 85 insertions(+), 8 deletions(-) create mode 100644 tests/integration/scheduler/test_run_job.py diff --git a/salt/utils/schedule.py b/salt/utils/schedule.py index 6d1a8311e5..3f7f26141e 100644 --- a/salt/utils/schedule.py +++ b/salt/utils/schedule.py @@ -210,7 +210,7 @@ class Schedule(object): # dict we treat it like it was there and is True # Check if we're able to run - if not data['run']: + if 'run' not in data or not data['run']: return data if 'jid_include' not in data or data['jid_include']: jobcount = 0 @@ -459,7 +459,10 @@ class Schedule(object): if 'name' not in data: data['name'] = name - log.info('Running Job: %s', name) + + # Assume run should be True until we check max_running + if 'run' not in data: + data['run'] = True if not self.standalone: data = self._check_max_running(func, @@ -468,8 +471,8 @@ class Schedule(object): datetime.datetime.now()) # Grab run, assume True - run = data.get('run', True) - if run: + if data.get('run'): + log.info('Running Job: %s', name) self._run_job(func, data) def enable_schedule(self): diff --git a/tests/integration/scheduler/test_run_job.py b/tests/integration/scheduler/test_run_job.py new file mode 100644 index 0000000000..c8cdcb6b24 --- /dev/null +++ b/tests/integration/scheduler/test_run_job.py @@ -0,0 +1,73 @@ +# -*- coding: utf-8 -*- + +# Import Python libs +from __future__ import absolute_import +import copy +import logging +import os + +# Import Salt Testing libs +from tests.support.case import ModuleCase +from tests.support.mixins import SaltReturnAssertsMixin + +# Import Salt Testing Libs +from tests.support.mock import MagicMock, patch +import tests.integration as integration + +# Import Salt libs +import salt.utils.schedule +import salt.utils.platform + +from salt.modules.test import ping as ping + +try: + import croniter # pylint: disable=W0611 + HAS_CRONITER = True +except ImportError: + HAS_CRONITER = False + +log = logging.getLogger(__name__) +ROOT_DIR = os.path.join(integration.TMP, 'schedule-unit-tests') +SOCK_DIR = os.path.join(ROOT_DIR, 'test-socks') + +DEFAULT_CONFIG = salt.config.minion_config(None) +DEFAULT_CONFIG['conf_dir'] = ROOT_DIR +DEFAULT_CONFIG['root_dir'] = ROOT_DIR +DEFAULT_CONFIG['sock_dir'] = SOCK_DIR +DEFAULT_CONFIG['pki_dir'] = os.path.join(ROOT_DIR, 'pki') +DEFAULT_CONFIG['cachedir'] = os.path.join(ROOT_DIR, 'cache') + + +class SchedulerRunJobTest(ModuleCase, SaltReturnAssertsMixin): + ''' + Validate the pkg module + ''' + def setUp(self): + with patch('salt.utils.schedule.clean_proc_dir', MagicMock(return_value=None)): + functions = {'test.ping': ping} + self.schedule = salt.utils.schedule.Schedule(copy.deepcopy(DEFAULT_CONFIG), functions, returners={}) + self.schedule.opts['loop_interval'] = 1 + + def tearDown(self): + self.schedule.reset() + + def test_run_job(self): + ''' + verify that scheduled job runs + ''' + job_name = 'test_run_job' + job = { + 'schedule': { + job_name: { + 'function': 'test.ping', + } + } + } + # Add the job to the
scheduler + self.schedule.opts.update(job) + + # Run job + self.schedule.run_job(job_name) + ret = self.schedule.job_status(job_name) + expected = {'function': 'test.ping', 'run': True, 'name': 'test_run_job'} + self.assertEqual(ret, expected) diff --git a/tests/unit/modules/test_schedule.py b/tests/unit/modules/test_schedule.py index f3e68f4b28..9fb01e94ec 100644 --- a/tests/unit/modules/test_schedule.py +++ b/tests/unit/modules/test_schedule.py @@ -150,14 +150,14 @@ class ScheduleTestCase(TestCase, LoaderModuleMockMixin): ''' Test if it run a scheduled job on the minion immediately. ''' - with patch.dict(schedule.__opts__, {'schedule': {}, 'sock_dir': SOCK_DIR}): + with patch.dict(schedule.__opts__, {'schedule': {'job1': JOB1}, 'sock_dir': SOCK_DIR}): mock = MagicMock(return_value=True) with patch.dict(schedule.__salt__, {'event.fire': mock}): - _ret_value = {'complete': True, 'schedule': {}} + _ret_value = {'complete': True, 'schedule': {'job1': JOB1}} with patch.object(SaltEvent, 'get_event', return_value=_ret_value): self.assertDictEqual(schedule.run_job('job1'), - {'comment': 'Job job1 does not exist.', - 'result': False}) + {'comment': 'Scheduling Job job1 on minion.', + 'result': True}) # 'enable_job' function tests: 1 diff --git a/tests/unit/test_module_names.py b/tests/unit/test_module_names.py index 1efcb5869e..c4109c0916 100644 --- a/tests/unit/test_module_names.py +++ b/tests/unit/test_module_names.py @@ -152,6 +152,7 @@ class BadTestModuleNamesTestCase(TestCase): 'integration.scheduler.test_skip', 'integration.scheduler.test_maxrunning', 'integration.scheduler.test_helpers', + 'integration.scheduler.test_run_job', 'integration.shell.test_spm', 'integration.shell.test_cp', 'integration.shell.test_syndic', -- 2.23.0 ++++++ fix-syndic-start-issue.patch ++++++
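The behavioral core of the fix-schedule.run_job patch above, reduced to a few lines (the job dict is illustrative):

    data = {'function': 'test.ping', 'name': 'test_run_job'}

    # Jobs without a time element used to lack the 'run' key entirely;
    # it is now assumed True until _check_max_running decides otherwise.
    if 'run' not in data:
        data['run'] = True

    if data.get('run'):
        print('Running Job: %s' % data['name'])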
From 0b15fe1ecc3ed468714a5a8d84787ab23ac6144e Mon Sep 17 00:00:00 2001 From: Mihai Dinca <mdinca@suse.de> Date: Thu, 2 May 2019 10:50:17 +0200 Subject: [PATCH] Fix syndic start issue
--- salt/utils/event.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/salt/utils/event.py b/salt/utils/event.py index d2700bd2a0..160cba9bde 100644 --- a/salt/utils/event.py +++ b/salt/utils/event.py @@ -879,7 +879,7 @@ class SaltEvent(object): self.subscriber.callbacks.add(event_handler) if not self.subscriber.reading: # This will handle reconnects - self.subscriber.read_async() + return self.subscriber.read_async() def __del__(self): # skip exceptions in destroy-- since destroy() doesn't cover interpreter -- 2.20.1 ++++++ fix-unit-test-for-grains-core.patch ++++++
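Why the one-line fix-syndic-start-issue change above matters: read_async() hands back a tornado future, and returning it lets the caller wait on the read instead of silently dropping it. A self-contained analogue in the same tornado coroutine style salt uses (function names are illustrative):

    import tornado.gen
    import tornado.ioloop

    @tornado.gen.coroutine
    def read_async():
        yield tornado.gen.sleep(0.01)
        raise tornado.gen.Return('payload')

    @tornado.gen.coroutine
    def set_event_handler():
        # Only works because read_async() returns its future.
        result = yield read_async()
        raise tornado.gen.Return(result)

    print(tornado.ioloop.IOLoop.current().run_sync(set_event_handler))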
From 7ffa39cd80393f2a3ed5cd75793b134b9d939cf9 Mon Sep 17 00:00:00 2001 From: Bo Maryniuk <bo@suse.de> Date: Thu, 11 Oct 2018 16:20:40 +0200 Subject: [PATCH] Fix unit test for grains core
--- tests/unit/grains/test_core.py | 11 +++++------ 1 file changed, 5 insertions(+), 6 deletions(-) diff --git a/tests/unit/grains/test_core.py b/tests/unit/grains/test_core.py index 2ab32ef41b..4923ee00b0 100644 --- a/tests/unit/grains/test_core.py +++ b/tests/unit/grains/test_core.py @@ -62,11 +62,10 @@ class CoreGrainsTestCase(TestCase, LoaderModuleMockMixin): def test_parse_etc_os_release(self, path_isfile_mock): path_isfile_mock.side_effect = lambda x: x == "/usr/lib/os-release" with salt.utils.files.fopen(os.path.join(OS_RELEASE_DIR, "ubuntu-17.10")) as os_release_file: - os_release_content = os_release_file.read() - with patch("salt.utils.files.fopen", mock_open(read_data=os_release_content)): - os_release = core._parse_os_release( - '/etc/os-release', - '/usr/lib/os-release') + os_release_content = os_release_file.readlines() + with patch("salt.utils.files.fopen", mock_open()) as os_release_file: + os_release_file.return_value.__iter__.return_value = os_release_content + os_release = core._parse_os_release(["/etc/os-release", "/usr/lib/os-release"]) self.assertEqual(os_release, { "NAME": "Ubuntu", "VERSION": "17.10 (Artful Aardvark)", @@ -128,7 +127,7 @@ class CoreGrainsTestCase(TestCase, LoaderModuleMockMixin): def test_missing_os_release(self): with patch('salt.utils.files.fopen', mock_open(read_data={})): - os_release = core._parse_os_release('/etc/os-release', '/usr/lib/os-release') + os_release = core._parse_os_release(['/etc/os-release', '/usr/lib/os-release']) self.assertEqual(os_release, {}) @skipIf(not salt.utils.platform.is_linux(), 'System is not Linux') -- 2.19.0 ++++++ fix-unit-tests-for-batch-async-after-refactor.patch ++++++
From a38adfa2efe40c2b1508b685af0b5d28a6bbcfc8 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Pablo=20Su=C3=A1rez=20Hern=C3=A1ndez?= <psuarezhernandez@suse.com> Date: Wed, 4 Mar 2020 10:13:43 +0000 Subject: [PATCH] Fix unit tests for batch async after refactor
--- tests/unit/cli/test_batch_async.py | 18 +++++++++++++++++- 1 file changed, 17 insertions(+), 1 deletion(-) diff --git a/tests/unit/cli/test_batch_async.py b/tests/unit/cli/test_batch_async.py index f1d36a81fb..e1ce60859b 100644 --- a/tests/unit/cli/test_batch_async.py +++ b/tests/unit/cli/test_batch_async.py @@ -126,9 +126,10 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase): self.batch.timedout_minions = {'bar'} self.batch.event = MagicMock() self.batch.metadata = {'mykey': 'myvalue'} + old_event = self.batch.event self.batch.end_batch() self.assertEqual( - self.batch.event.fire_event.call_args[0], + old_event.fire_event.call_args[0], ( { 'available_minions': set(['foo', 'bar']), @@ -146,6 +147,21 @@ class AsyncBatchTestCase(AsyncTestCase, TestCase): event = MagicMock() batch.event = event batch.__del__() + self.assertEqual(batch.local, None) + self.assertEqual(batch.event, None) + self.assertEqual(batch.ioloop, None) + + def test_batch_close_safe(self): + batch = BatchAsync(MagicMock(), MagicMock(), MagicMock()) + event = MagicMock() + batch.event = event + batch.patterns = { ('salt/job/1234/ret/*', 'find_job_return'), ('salt/job/4321/ret/*', 'find_job_return') } + batch.close_safe() + self.assertEqual(batch.local, None) + self.assertEqual(batch.event, None) + self.assertEqual(batch.ioloop, None) + self.assertEqual( + len(event.unsubscribe.mock_calls), 2) self.assertEqual( len(event.remove_event_handler.mock_calls), 1) -- 2.23.0 ++++++ fix-virt-states-to-not-fail-on-vms-already-stopped.-.patch ++++++
From de0b7d8eaf50813008533afc66f4ddef75f0456d Mon Sep 17 00:00:00 2001 From: Cedric Bosdonnat <cbosdonnat@suse.com> Date: Mon, 16 Dec 2019 11:27:49 +0100 Subject: [PATCH] Fix virt states to not fail on VMs already stopped. (#195)
The virt.stopped and virt.powered_off states need to do nothing and not fail if one of the targeted VMs is already in shutdown state. --- salt/states/virt.py | 45 ++++++++++++++++++++-------------- tests/unit/states/test_virt.py | 36 +++++++++++++++++++++++++++ 2 files changed, 63 insertions(+), 18 deletions(-) diff --git a/salt/states/virt.py b/salt/states/virt.py index 32a9e31ae5..68e9ac6fb6 100644 --- a/salt/states/virt.py +++ b/salt/states/virt.py @@ -145,35 +145,45 @@ def keys(name, basepath='/etc/pki', **kwargs): return ret -def _virt_call(domain, function, section, comment, +def _virt_call(domain, function, section, comment, state=None, connection=None, username=None, password=None, **kwargs): ''' Helper to call the virt functions. Wildcards supported. - :param domain: - :param function: - :param section: - :param comment: - :return: + :param domain: the domain to apply the function on. Can contain wildcards. + :param function: virt function to call + :param section: key for the changed domains in the return changes dictionary + :param comment: comment to return + :param state: the expected final state of the VM. If None the VM state won't be checked. + :return: the salt state return ''' ret = {'name': domain, 'changes': {}, 'result': True, 'comment': ''} targeted_domains = fnmatch.filter(__salt__['virt.list_domains'](), domain) changed_domains = list() ignored_domains = list() + noaction_domains = list() for targeted_domain in targeted_domains: try: - response = __salt__['virt.{0}'.format(function)](targeted_domain, - connection=connection, - username=username, - password=password, - **kwargs) - if isinstance(response, dict): - response = response['name'] - changed_domains.append({'domain': targeted_domain, function: response}) + action_needed = True + # If a state has been provided, use it to see if we have something to do + if state is not None: + domain_state = __salt__['virt.vm_state'](targeted_domain) + action_needed = domain_state.get(targeted_domain) != state + if action_needed: + response = __salt__['virt.{0}'.format(function)](targeted_domain, + connection=connection, + username=username, + password=password, + **kwargs) + if isinstance(response, dict): + response = response['name'] + changed_domains.append({'domain': targeted_domain, function: response}) + else: + noaction_domains.append(targeted_domain) except libvirt.libvirtError as err: ignored_domains.append({'domain': targeted_domain, 'issue': six.text_type(err)}) if not changed_domains: - ret['result'] = False + ret['result'] = not ignored_domains and bool(targeted_domains) ret['comment'] = 'No changes had happened' if ignored_domains: ret['changes'] = {'ignored': ignored_domains} @@ -206,7 +216,7 @@ def stopped(name, connection=None, username=None, password=None): virt.stopped ''' - return _virt_call(name, 'shutdown', 'stopped', "Machine has been shut down", + return _virt_call(name, 'shutdown', 'stopped', 'Machine has been shut down', state='shutdown', connection=connection, username=username, password=password) @@ -231,8 +241,7 @@ def powered_off(name, connection=None, username=None, password=None): domain_name: virt.stopped ''' - - return _virt_call(name, 'stop', 'unpowered', 'Machine has been powered off', + return _virt_call(name, 'stop', 'unpowered', 'Machine has been powered off', state='shutdown', connection=connection, username=username, password=password) diff --git a/tests/unit/states/test_virt.py b/tests/unit/states/test_virt.py index 2904fa224d..2af5caca1b 100644 --- a/tests/unit/states/test_virt.py 
+++ b/tests/unit/states/test_virt.py @@ -378,8 +378,11 @@ class LibvirtTestCase(TestCase, LoaderModuleMockMixin): 'result': True} shutdown_mock = MagicMock(return_value=True) + + # Normal case with patch.dict(virt.__salt__, { # pylint: disable=no-member 'virt.list_domains': MagicMock(return_value=['myvm', 'vm1']), + 'virt.vm_state': MagicMock(return_value={'myvm': 'running'}), 'virt.shutdown': shutdown_mock }): ret.update({'changes': { @@ -389,8 +392,10 @@ class LibvirtTestCase(TestCase, LoaderModuleMockMixin): self.assertDictEqual(virt.stopped('myvm'), ret) shutdown_mock.assert_called_with('myvm', connection=None, username=None, password=None) + # Normal case with user-provided connection parameters with patch.dict(virt.__salt__, { # pylint: disable=no-member 'virt.list_domains': MagicMock(return_value=['myvm', 'vm1']), + 'virt.vm_state': MagicMock(return_value={'myvm': 'running'}), 'virt.shutdown': shutdown_mock, }): self.assertDictEqual(virt.stopped('myvm', @@ -399,8 +404,10 @@ class LibvirtTestCase(TestCase, LoaderModuleMockMixin): password='secret'), ret) shutdown_mock.assert_called_with('myvm', connection='myconnection', username='user', password='secret') + # Case where an error occurred during the shutdown with patch.dict(virt.__salt__, { # pylint: disable=no-member 'virt.list_domains': MagicMock(return_value=['myvm', 'vm1']), + 'virt.vm_state': MagicMock(return_value={'myvm': 'running'}), 'virt.shutdown': MagicMock(side_effect=self.mock_libvirt.libvirtError('Some error')) }): ret.update({'changes': {'ignored': [{'domain': 'myvm', 'issue': 'Some error'}]}, @@ -408,10 +415,21 @@ class LibvirtTestCase(TestCase, LoaderModuleMockMixin): 'comment': 'No changes had happened'}) self.assertDictEqual(virt.stopped('myvm'), ret) + # Case there the domain doesn't exist with patch.dict(virt.__salt__, {'virt.list_domains': MagicMock(return_value=[])}): # pylint: disable=no-member ret.update({'changes': {}, 'result': False, 'comment': 'No changes had happened'}) self.assertDictEqual(virt.stopped('myvm'), ret) + # Case where the domain is already stopped + with patch.dict(virt.__salt__, { # pylint: disable=no-member + 'virt.list_domains': MagicMock(return_value=['myvm', 'vm1']), + 'virt.vm_state': MagicMock(return_value={'myvm': 'shutdown'}) + }): + ret.update({'changes': {}, + 'result': True, + 'comment': 'No changes had happened'}) + self.assertDictEqual(virt.stopped('myvm'), ret) + def test_powered_off(self): ''' powered_off state test cases. 
@@ -421,8 +439,11 @@ class LibvirtTestCase(TestCase, LoaderModuleMockMixin): 'result': True} stop_mock = MagicMock(return_value=True) + + # Normal case with patch.dict(virt.__salt__, { # pylint: disable=no-member 'virt.list_domains': MagicMock(return_value=['myvm', 'vm1']), + 'virt.vm_state': MagicMock(return_value={'myvm': 'running'}), 'virt.stop': stop_mock }): ret.update({'changes': { @@ -432,8 +453,10 @@ class LibvirtTestCase(TestCase, LoaderModuleMockMixin): self.assertDictEqual(virt.powered_off('myvm'), ret) stop_mock.assert_called_with('myvm', connection=None, username=None, password=None) + # Normal case with user-provided connection parameters with patch.dict(virt.__salt__, { # pylint: disable=no-member 'virt.list_domains': MagicMock(return_value=['myvm', 'vm1']), + 'virt.vm_state': MagicMock(return_value={'myvm': 'running'}), 'virt.stop': stop_mock, }): self.assertDictEqual(virt.powered_off('myvm', @@ -442,8 +465,10 @@ class LibvirtTestCase(TestCase, LoaderModuleMockMixin): password='secret'), ret) stop_mock.assert_called_with('myvm', connection='myconnection', username='user', password='secret') + # Case where an error occurred during the poweroff with patch.dict(virt.__salt__, { # pylint: disable=no-member 'virt.list_domains': MagicMock(return_value=['myvm', 'vm1']), + 'virt.vm_state': MagicMock(return_value={'myvm': 'running'}), 'virt.stop': MagicMock(side_effect=self.mock_libvirt.libvirtError('Some error')) }): ret.update({'changes': {'ignored': [{'domain': 'myvm', 'issue': 'Some error'}]}, @@ -451,10 +476,21 @@ class LibvirtTestCase(TestCase, LoaderModuleMockMixin): 'comment': 'No changes had happened'}) self.assertDictEqual(virt.powered_off('myvm'), ret) + # Case there the domain doesn't exist with patch.dict(virt.__salt__, {'virt.list_domains': MagicMock(return_value=[])}): # pylint: disable=no-member ret.update({'changes': {}, 'result': False, 'comment': 'No changes had happened'}) self.assertDictEqual(virt.powered_off('myvm'), ret) + # Case where the domain is already stopped + with patch.dict(virt.__salt__, { # pylint: disable=no-member + 'virt.list_domains': MagicMock(return_value=['myvm', 'vm1']), + 'virt.vm_state': MagicMock(return_value={'myvm': 'shutdown'}) + }): + ret.update({'changes': {}, + 'result': True, + 'comment': 'No changes had happened'}) + self.assertDictEqual(virt.powered_off('myvm'), ret) + def test_snapshot(self): ''' snapshot state test cases. -- 2.23.0 ++++++ fix-virt.full_info-176.patch ++++++
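The shape of the guard introduced by the fix-virt-states-to-not-fail patch above, with literal placeholder data where the real code calls __salt__['virt.vm_state']:

    desired_state = 'shutdown'
    domain_state = {'myvm': 'shutdown'}   # as virt.vm_state would report it

    if domain_state.get('myvm') != desired_state:
        print('calling virt.shutdown on myvm')
    else:
        # Already in the desired state: no call is made, no change is
        # recorded, and the state result stays True instead of failing.
        print('no action needed for myvm')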
From 4ce0bc544174fdb00482db4653fb4b0ef411e78b Mon Sep 17 00:00:00 2001 From: Cedric Bosdonnat <cbosdonnat@suse.com> Date: Tue, 3 Sep 2019 15:18:04 +0200 Subject: [PATCH] Fix virt.full_info (#176)
* virt.get_xml doesn't take a domain object In some places in the virt module, the get_xml function was called with a domain object, leading to runtime errors like the following one: 'ERROR: The VM "<libvirt.virDomain object at 0x7fad04208650>" is not present' * qemu-img info needs -U flag on running VMs When getting VM disk information on a running VM, the following error occurred: The minion function caused an exception: Traceback (most recent call last): File "/usr/lib/python3.6/site-packages/salt/minion.py", line 1673, in _thread_return return_data = minion_instance.executors[fname](opts, data, func, args, kwargs) File "/usr/lib/python3.6/site-packages/salt/executors/direct_call.py", line 12, in execute return func(*args, **kwargs) File "/usr/lib/python3.6/site-packages/salt/modules/virt.py", line 2411, in full_info 'vm_info': vm_info()} File "/usr/lib/python3.6/site-packages/salt/modules/virt.py", line 2020, in vm_info info[domain.name()] = _info(domain) File "/usr/lib/python3.6/site-packages/salt/modules/virt.py", line 2004, in _info 'disks': _get_disks(dom), File "/usr/lib/python3.6/site-packages/salt/modules/virt.py", line 465, in _get_disks output = _parse_qemu_img_info(qemu_output) File "/usr/lib/python3.6/site-packages/salt/modules/virt.py", line 287, in _parse_qemu_img_info raw_infos = salt.utils.json.loads(info) File "/usr/lib/python3.6/site-packages/salt/utils/json.py", line 92, in loads return json_module.loads(s, **kwargs) File "/usr/lib64/python3.6/json/__init__.py", line 354, in loads return _default_decoder.decode(s) File "/usr/lib64/python3.6/json/decoder.py", line 339, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) File "/usr/lib64/python3.6/json/decoder.py", line 357, in raw_decode raise JSONDecodeError("Expecting value", s, err.value) from None json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0) This is due to the fact that qemu-img can't get info on a disk that is already in use, e.g. by a running VM. Using the qemu-img -U flag gets it running in all cases. 
--- salt/modules/virt.py | 10 +- tests/unit/modules/test_virt.py | 242 +++++++++++++++++--------------- 2 files changed, 132 insertions(+), 120 deletions(-) diff --git a/salt/modules/virt.py b/salt/modules/virt.py index 96c17bd60b..d01b6c3f1e 100644 --- a/salt/modules/virt.py +++ b/salt/modules/virt.py @@ -331,7 +331,7 @@ def _get_uuid(dom): salt '*' virt.get_uuid <domain> ''' - return ElementTree.fromstring(get_xml(dom)).find('uuid').text + return ElementTree.fromstring(dom.XMLDesc(0)).find('uuid').text def _get_on_poweroff(dom): @@ -344,7 +344,7 @@ def _get_on_poweroff(dom): salt '*' virt.get_on_restart <domain> ''' - node = ElementTree.fromstring(get_xml(dom)).find('on_poweroff') + node = ElementTree.fromstring(dom.XMLDesc(0)).find('on_poweroff') return node.text if node is not None else '' @@ -358,7 +358,7 @@ def _get_on_reboot(dom): salt '*' virt.get_on_reboot <domain> ''' - node = ElementTree.fromstring(get_xml(dom)).find('on_reboot') + node = ElementTree.fromstring(dom.XMLDesc(0)).find('on_reboot') return node.text if node is not None else '' @@ -372,7 +372,7 @@ def _get_on_crash(dom): salt '*' virt.get_on_crash <domain> ''' - node = ElementTree.fromstring(get_xml(dom)).find('on_crash') + node = ElementTree.fromstring(dom.XMLDesc(0)).find('on_crash') return node.text if node is not None else '' @@ -458,7 +458,7 @@ def _get_disks(dom): if driver is not None and driver.get('type') == 'qcow2': try: stdout = subprocess.Popen( - ['qemu-img', 'info', '--output', 'json', '--backing-chain', disk['file']], + ['qemu-img', 'info', '-U', '--output', 'json', '--backing-chain', disk['file']], shell=False, stdout=subprocess.PIPE).communicate()[0] qemu_output = salt.utils.stringutils.to_str(stdout) diff --git a/tests/unit/modules/test_virt.py b/tests/unit/modules/test_virt.py index e644e62452..4d20e998d8 100644 --- a/tests/unit/modules/test_virt.py +++ b/tests/unit/modules/test_virt.py @@ -81,7 +81,9 @@ class VirtTestCase(TestCase, LoaderModuleMockMixin): mock_domain.XMLDesc.return_value = xml # pylint: disable=no-member # Return state as shutdown - mock_domain.info.return_value = [4, 0, 0, 0] # pylint: disable=no-member + mock_domain.info.return_value = [4, 2048 * 1024, 1024 * 1024, 2, 1234] # pylint: disable=no-member + mock_domain.ID.return_value = 1 + mock_domain.name.return_value = name return mock_domain def test_disk_profile_merge(self): @@ -1394,49 +1396,6 @@ class VirtTestCase(TestCase, LoaderModuleMockMixin): re.match('^([0-9A-F]{2}[:-]){5}([0-9A-F]{2})$', interface_attrs['mac'], re.I)) - def test_get_graphics(self): - ''' - Test virt.get_graphics() - ''' - xml = '''<domain type='kvm' id='7'> - <name>test-vm</name> - <devices> - <graphics type='vnc' port='5900' autoport='yes' listen='0.0.0.0'> - <listen type='address' address='0.0.0.0'/> - </graphics> - </devices> - </domain> - ''' - self.set_mock_vm("test-vm", xml) - - graphics = virt.get_graphics('test-vm') - self.assertEqual('vnc', graphics['type']) - self.assertEqual('5900', graphics['port']) - self.assertEqual('0.0.0.0', graphics['listen']) - - def test_get_nics(self): - ''' - Test virt.get_nics() - ''' - xml = '''<domain type='kvm' id='7'> - <name>test-vm</name> - <devices> - <interface type='bridge'> - <mac address='ac:de:48:b6:8b:59'/> - <source bridge='br0'/> - <model type='virtio'/> - <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> - </interface> - </devices> - </domain> - ''' - self.set_mock_vm("test-vm", xml) - - nics = virt.get_nics('test-vm') - nic = nics[list(nics)[0]] - 
self.assertEqual('bridge', nic['type']) - self.assertEqual('ac:de:48:b6:8b:59', nic['mac']) - def test_parse_qemu_img_info(self): ''' Make sure that qemu-img info output is properly parsed @@ -1558,77 +1517,6 @@ class VirtTestCase(TestCase, LoaderModuleMockMixin): ], }, virt._parse_qemu_img_info(qemu_infos)) - def test_get_disks(self): - ''' - Test virt.get_disks() - ''' - xml = '''<domain type='kvm' id='7'> - <name>test-vm</name> - <devices> - <disk type='file' device='disk'> - <driver name='qemu' type='qcow2'/> - <source file='/disks/test.qcow2'/> - <target dev='vda' bus='virtio'/> - </disk> - <disk type='file' device='cdrom'> - <driver name='qemu' type='raw'/> - <source file='/disks/test-cdrom.iso'/> - <target dev='hda' bus='ide'/> - <readonly/> - </disk> - </devices> - </domain> - ''' - self.set_mock_vm("test-vm", xml) - - qemu_infos = '''[{ - "virtual-size": 25769803776, - "filename": "/disks/test.qcow2", - "cluster-size": 65536, - "format": "qcow2", - "actual-size": 217088, - "format-specific": { - "type": "qcow2", - "data": { - "compat": "1.1", - "lazy-refcounts": false, - "refcount-bits": 16, - "corrupt": false - } - }, - "full-backing-filename": "/disks/mybacking.qcow2", - "backing-filename": "mybacking.qcow2", - "dirty-flag": false - }, - { - "virtual-size": 25769803776, - "filename": "/disks/mybacking.qcow2", - "cluster-size": 65536, - "format": "qcow2", - "actual-size": 393744384, - "format-specific": { - "type": "qcow2", - "data": { - "compat": "1.1", - "lazy-refcounts": false, - "refcount-bits": 16, - "corrupt": false - } - }, - "dirty-flag": false - }]''' - - self.mock_popen.communicate.return_value = [qemu_infos] # pylint: disable=no-member - disks = virt.get_disks('test-vm') - disk = disks.get('vda') - self.assertEqual('/disks/test.qcow2', disk['file']) - self.assertEqual('disk', disk['type']) - self.assertEqual('/disks/mybacking.qcow2', disk['backing file']['file']) - cdrom = disks.get('hda') - self.assertEqual('/disks/test-cdrom.iso', cdrom['file']) - self.assertEqual('cdrom', cdrom['type']) - self.assertFalse('backing file' in cdrom.keys()) - @patch('salt.modules.virt.stop', return_value=True) @patch('salt.modules.virt.undefine') @patch('os.remove') @@ -2994,3 +2882,127 @@ class VirtTestCase(TestCase, LoaderModuleMockMixin): virt.volume_delete('default', 'missing') virt.volume_delete('missing', 'test_volume') self.assertEqual(mock_delete.call_count, 2) + + def test_full_info(self): + ''' + Test virt.full_info + ''' + xml = '''<domain type='kvm' id='7'> + <uuid>28deee33-4859-4f23-891c-ee239cffec94</uuid> + <name>test-vm</name> + <on_poweroff>destroy</on_poweroff> + <on_reboot>restart</on_reboot> + <on_crash>destroy</on_crash> + <devices> + <disk type='file' device='disk'> + <driver name='qemu' type='qcow2'/> + <source file='/disks/test.qcow2'/> + <target dev='vda' bus='virtio'/> + </disk> + <disk type='file' device='cdrom'> + <driver name='qemu' type='raw'/> + <source file='/disks/test-cdrom.iso'/> + <target dev='hda' bus='ide'/> + <readonly/> + </disk> + <interface type='bridge'> + <mac address='ac:de:48:b6:8b:59'/> + <source bridge='br0'/> + <model type='virtio'/> + <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> + </interface> + <graphics type='vnc' port='5900' autoport='yes' listen='0.0.0.0'> + <listen type='address' address='0.0.0.0'/> + </graphics> + </devices> + </domain> + ''' + self.set_mock_vm("test-vm", xml) + + qemu_infos = '''[{ + "virtual-size": 25769803776, + "filename": "/disks/test.qcow2", + "cluster-size": 65536, + "format": 
"qcow2", + "actual-size": 217088, + "format-specific": { + "type": "qcow2", + "data": { + "compat": "1.1", + "lazy-refcounts": false, + "refcount-bits": 16, + "corrupt": false + } + }, + "full-backing-filename": "/disks/mybacking.qcow2", + "backing-filename": "mybacking.qcow2", + "dirty-flag": false + }, + { + "virtual-size": 25769803776, + "filename": "/disks/mybacking.qcow2", + "cluster-size": 65536, + "format": "qcow2", + "actual-size": 393744384, + "format-specific": { + "type": "qcow2", + "data": { + "compat": "1.1", + "lazy-refcounts": false, + "refcount-bits": 16, + "corrupt": false + } + }, + "dirty-flag": false + }]''' + + self.mock_popen.communicate.return_value = [qemu_infos] # pylint: disable=no-member + + self.mock_conn.getInfo = MagicMock(return_value=['x86_64', 4096, 8, 2712, 1, 2, 4, 2]) + + actual = virt.full_info() + + # Test the hypervisor infos + self.assertEqual(2816, actual['freemem']) + self.assertEqual(6, actual['freecpu']) + self.assertEqual(4, actual['node_info']['cpucores']) + self.assertEqual(2712, actual['node_info']['cpumhz']) + self.assertEqual('x86_64', actual['node_info']['cpumodel']) + self.assertEqual(8, actual['node_info']['cpus']) + self.assertEqual(2, actual['node_info']['cputhreads']) + self.assertEqual(1, actual['node_info']['numanodes']) + self.assertEqual(4096, actual['node_info']['phymemory']) + self.assertEqual(2, actual['node_info']['sockets']) + + # Test the vm_info output: + self.assertEqual(2, actual['vm_info']['test-vm']['cpu']) + self.assertEqual(1234, actual['vm_info']['test-vm']['cputime']) + self.assertEqual(1024 * 1024, actual['vm_info']['test-vm']['mem']) + self.assertEqual(2048 * 1024, actual['vm_info']['test-vm']['maxMem']) + self.assertEqual('shutdown', actual['vm_info']['test-vm']['state']) + self.assertEqual('28deee33-4859-4f23-891c-ee239cffec94', actual['vm_info']['test-vm']['uuid']) + self.assertEqual('destroy', actual['vm_info']['test-vm']['on_crash']) + self.assertEqual('restart', actual['vm_info']['test-vm']['on_reboot']) + self.assertEqual('destroy', actual['vm_info']['test-vm']['on_poweroff']) + + # Test the nics + nic = actual['vm_info']['test-vm']['nics']['ac:de:48:b6:8b:59'] + self.assertEqual('bridge', nic['type']) + self.assertEqual('ac:de:48:b6:8b:59', nic['mac']) + + # Test the disks + disks = actual['vm_info']['test-vm']['disks'] + disk = disks.get('vda') + self.assertEqual('/disks/test.qcow2', disk['file']) + self.assertEqual('disk', disk['type']) + self.assertEqual('/disks/mybacking.qcow2', disk['backing file']['file']) + cdrom = disks.get('hda') + self.assertEqual('/disks/test-cdrom.iso', cdrom['file']) + self.assertEqual('cdrom', cdrom['type']) + self.assertFalse('backing file' in cdrom.keys()) + + # Test the graphics + graphics = actual['vm_info']['test-vm']['graphics'] + self.assertEqual('vnc', graphics['type']) + self.assertEqual('5900', graphics['port']) + self.assertEqual('0.0.0.0', graphics['listen']) -- 2.20.1 ++++++ fix-virt.get_hypervisor-188.patch ++++++
From ee95a135f11df05a644ab7d614742b03378bac45 Mon Sep 17 00:00:00 2001 From: Cedric Bosdonnat <cbosdonnat@suse.com> Date: Tue, 10 Dec 2019 10:27:26 +0100 Subject: [PATCH] Fix virt.get_hypervisor() (#188)
virt.get_hypervisor resulted in: AttributeError: module 'salt.loader.dev-srv.tf.local.int.module.virt' has no attribute '_is_{}_hyper' This was due to a misplaced parenthesis. --- salt/modules/virt.py | 2 +- tests/unit/modules/test_virt.py | 14 ++++++++++++++ 2 files changed, 15 insertions(+), 1 deletion(-) diff --git a/salt/modules/virt.py b/salt/modules/virt.py index 5e26964449..dedcf8cb6f 100644 --- a/salt/modules/virt.py +++ b/salt/modules/virt.py @@ -3309,7 +3309,7 @@ def get_hypervisor(): # To add a new 'foo' hypervisor, add the _is_foo_hyper function, # add 'foo' to the list below and add it to the docstring with a .. versionadded:: hypervisors = ['kvm', 'xen'] - result = [hyper for hyper in hypervisors if getattr(sys.modules[__name__], '_is_{}_hyper').format(hyper)()] + result = [hyper for hyper in hypervisors if getattr(sys.modules[__name__], '_is_{}_hyper'.format(hyper))()] return result[0] if result else None diff --git a/tests/unit/modules/test_virt.py b/tests/unit/modules/test_virt.py index d8efafc063..6f594a8ff3 100644 --- a/tests/unit/modules/test_virt.py +++ b/tests/unit/modules/test_virt.py @@ -3044,3 +3044,17 @@ class VirtTestCase(TestCase, LoaderModuleMockMixin): # Shouldn't be called with another parameter so far since those are not implemented # and thus throwing exceptions. mock_pool.delete.assert_called_once_with(self.mock_libvirt.VIR_STORAGE_POOL_DELETE_NORMAL) + + @patch('salt.modules.virt._is_kvm_hyper', return_value=True) + @patch('salt.modules.virt._is_xen_hyper', return_value=False) + def test_get_hypervisor(self, isxen_mock, iskvm_mock): + ''' + test the virt.get_hypervisor() function + ''' + self.assertEqual('kvm', virt.get_hypervisor()) + + iskvm_mock.return_value = False + self.assertIsNone(virt.get_hypervisor()) + + isxen_mock.return_value = True + self.assertEqual('xen', virt.get_hypervisor()) -- 2.23.0 ++++++ fix-zypper-pkg.list_pkgs-expectation-and-dpkg-mockin.patch ++++++
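The fix-virt.get_hypervisor patch above is a one-character move worth spelling out: format() has to build the attribute name before getattr runs, rather than being called on whatever getattr returned. A runnable illustration:

    import sys

    def _is_kvm_hyper():
        return True

    mod = sys.modules[__name__]

    # Broken: looks up the literal name '_is_{}_hyper' -> AttributeError
    #   getattr(mod, '_is_{}_hyper').format('kvm')()
    # Fixed: format first, then look up '_is_kvm_hyper' and call it
    assert getattr(mod, '_is_{}_hyper'.format('kvm'))()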
From 8c4066c668147b1180c56f39722d2ade78ffd41c Mon Sep 17 00:00:00 2001 From: Mihai Dinca <mdinca@suse.de> Date: Thu, 13 Jun 2019 17:48:55 +0200 Subject: [PATCH] Fix zypper pkg.list_pkgs expectation and dpkg mocking
--- tests/unit/modules/test_dpkg_lowpkg.py | 12 ++++++------ tests/unit/modules/test_zypperpkg.py | 2 +- 2 files changed, 7 insertions(+), 7 deletions(-) diff --git a/tests/unit/modules/test_dpkg_lowpkg.py b/tests/unit/modules/test_dpkg_lowpkg.py index d16ce3cc1a..98557a1d10 100644 --- a/tests/unit/modules/test_dpkg_lowpkg.py +++ b/tests/unit/modules/test_dpkg_lowpkg.py @@ -127,9 +127,9 @@ class DpkgTestCase(TestCase, LoaderModuleMockMixin): with patch.dict(dpkg.__salt__, {'cmd.run_all': mock}): self.assertEqual(dpkg.file_dict('httpd'), 'Error: error') - @patch('salt.modules.dpkg._get_pkg_ds_avail', MagicMock(return_value=dselect_pkg)) - @patch('salt.modules.dpkg._get_pkg_info', MagicMock(return_value=pkgs_info)) - @patch('salt.modules.dpkg._get_pkg_license', MagicMock(return_value='BSD v3')) + @patch('salt.modules.dpkg_lowpkg._get_pkg_ds_avail', MagicMock(return_value=dselect_pkg)) + @patch('salt.modules.dpkg_lowpkg._get_pkg_info', MagicMock(return_value=pkgs_info)) + @patch('salt.modules.dpkg_lowpkg._get_pkg_license', MagicMock(return_value='BSD v3')) def test_info(self): ''' Test info @@ -154,9 +154,9 @@ class DpkgTestCase(TestCase, LoaderModuleMockMixin): assert pkg_data['maintainer'] == 'Simpsons Developers <simpsons-devel-discuss@lists.springfield.org>' assert pkg_data['license'] == 'BSD v3' - @patch('salt.modules.dpkg._get_pkg_ds_avail', MagicMock(return_value=dselect_pkg)) - @patch('salt.modules.dpkg._get_pkg_info', MagicMock(return_value=pkgs_info)) - @patch('salt.modules.dpkg._get_pkg_license', MagicMock(return_value='BSD v3')) + @patch('salt.modules.dpkg_lowpkg._get_pkg_ds_avail', MagicMock(return_value=dselect_pkg)) + @patch('salt.modules.dpkg_lowpkg._get_pkg_info', MagicMock(return_value=pkgs_info)) + @patch('salt.modules.dpkg_lowpkg._get_pkg_license', MagicMock(return_value='BSD v3')) def test_info_attr(self): ''' Test info with 'attr' parameter diff --git a/tests/unit/modules/test_zypperpkg.py b/tests/unit/modules/test_zypperpkg.py index 5c5091a570..a7063e47c6 100644 --- a/tests/unit/modules/test_zypperpkg.py +++ b/tests/unit/modules/test_zypperpkg.py @@ -659,7 +659,7 @@ Repository 'DUMMY' not found by its alias, number, or URI. 'install_date_time_t': 1503572639, 'epoch': None, }], - 'perseus-dummy.i586': [{ + 'perseus-dummy': [{ 'version': '1.1', 'release': '1.1', 'arch': 'i586', -- 2.21.0 ++++++ fix-zypper.list_pkgs-to-be-aligned-with-pkg-state.patch ++++++
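The dpkg half of the fix-zypper-pkg.list_pkgs-expectation-and-dpkg-mockin patch above is the usual mock rule: patch the name where it actually lives. Once the module became salt.modules.dpkg_lowpkg, decorators still targeting salt.modules.dpkg patched a module the tests never exercised. A generic standard-library demonstration of the principle:

    import os.path
    from unittest import mock

    # Patch the attribute on the module object the code under test uses;
    # patching a stale or renamed dotted path silently has no effect.
    with mock.patch('os.path.isfile', mock.MagicMock(return_value=True)):
        assert os.path.isfile('/no/such/file')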
From 3b5803d31a93d2f619246d48691f52f6c65d52ee Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Pablo=20Su=C3=A1rez=20Hern=C3=A1ndez?= <psuarezhernandez@suse.com> Date: Mon, 25 Jun 2018 13:06:40 +0100 Subject: [PATCH] Fix zypper.list_pkgs to be aligned with pkg state
Handle packages with multiple versions properly with zypper Add unit test coverage for multiple-version packages on Zypper Fix '_find_remove_targets' after aligning Zypper with pkg state --- salt/states/pkg.py | 21 --------------------- 1 file changed, 21 deletions(-) diff --git a/salt/states/pkg.py b/salt/states/pkg.py index 2034262b23..0aca1e0af8 100644 --- a/salt/states/pkg.py +++ b/salt/states/pkg.py @@ -455,16 +455,6 @@ def _find_remove_targets(name=None, if __grains__['os'] == 'FreeBSD' and origin: cver = [k for k, v in six.iteritems(cur_pkgs) if v['origin'] == pkgname] - elif __grains__['os_family'] == 'Suse': - # On SUSE systems. Zypper returns packages without "arch" in name - try: - namepart, archpart = pkgname.rsplit('.', 1) - except ValueError: - cver = cur_pkgs.get(pkgname, []) - else: - if archpart in salt.utils.pkg.rpm.ARCHES + ("noarch",): - pkgname = namepart - cver = cur_pkgs.get(pkgname, []) else: cver = cur_pkgs.get(pkgname, []) @@ -871,17 +861,6 @@ def _verify_install(desired, new_pkgs, ignore_epoch=False, new_caps=None): cver = new_pkgs.get(pkgname.split('%')[0]) elif __grains__['os_family'] == 'Debian': cver = new_pkgs.get(pkgname.split('=')[0]) - elif __grains__['os_family'] == 'Suse': - # On SUSE systems. Zypper returns packages without "arch" in name - try: - namepart, archpart = pkgname.rsplit('.', 1) - except ValueError: - cver = new_pkgs.get(pkgname) - else: - if archpart in salt.utils.pkg.rpm.ARCHES + ("noarch",): - cver = new_pkgs.get(namepart) - else: - cver = new_pkgs.get(pkgname) else: cver = new_pkgs.get(pkgname) if not cver and pkgname in new_caps: -- 2.17.1 ++++++ fixes-cve-2018-15750-cve-2018-15751.patch ++++++
From b10ca8ee857e14915ac83a8614521495b42b5d2b Mon Sep 17 00:00:00 2001 From: Erik Johnson <palehose@gmail.com> Date: Fri, 24 Aug 2018 10:35:55 -0500 Subject: [PATCH] Fixes: CVE-2018-15750, CVE-2018-15751
Ensure that tokens are hex to avoid hanging/errors in cherrypy Add empty token salt-api integration tests Handle Auth exceptions in run_job Update tornado test to correct authentication message --- salt/netapi/rest_cherrypy/app.py | 7 ------- tests/integration/netapi/rest_tornado/test_app.py | 4 ++-- 2 files changed, 2 insertions(+), 9 deletions(-) diff --git a/salt/netapi/rest_cherrypy/app.py b/salt/netapi/rest_cherrypy/app.py index 40ee976b25..f9ca908115 100644 --- a/salt/netapi/rest_cherrypy/app.py +++ b/salt/netapi/rest_cherrypy/app.py @@ -1174,13 +1174,6 @@ class LowDataAdapter(object): except (TypeError, ValueError): raise cherrypy.HTTPError(401, 'Invalid token') - if 'token' in chunk: - # Make sure that auth token is hex - try: - int(chunk['token'], 16) - except (TypeError, ValueError): - raise cherrypy.HTTPError(401, 'Invalid token') - if client: chunk['client'] = client diff --git a/tests/integration/netapi/rest_tornado/test_app.py b/tests/integration/netapi/rest_tornado/test_app.py index a6829bdd4f..da96012b41 100644 --- a/tests/integration/netapi/rest_tornado/test_app.py +++ b/tests/integration/netapi/rest_tornado/test_app.py @@ -240,8 +240,8 @@ class TestSaltAPIHandler(_SaltnadoIntegrationTestCase): self.assertIn('jid', ret[0]) # the first 2 are regular returns self.assertIn('jid', ret[1]) self.assertIn('Failed to authenticate', ret[2]) # bad auth - self.assertEqual(ret[0]['minions'], sorted(['minion', 'sub_minion'])) - self.assertEqual(ret[1]['minions'], sorted(['minion', 'sub_minion'])) + self.assertEqual(ret[0]['minions'], sorted(['minion', 'sub_minion', 'localhost'])) + self.assertEqual(ret[1]['minions'], sorted(['minion', 'sub_minion', 'localhost'])) def test_simple_local_async_post_no_tgt(self): low = [{'client': 'local_async', -- 2.17.1 ++++++ fixing-streamclosed-issue.patch ++++++
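The validation consolidated by the fixes-cve-2018-15750-cve-2018-15751 patch above boils down to rejecting any token that is not plain hexadecimal, avoiding the cherrypy hangs and errors mentioned in the commit message; as a standalone sketch of that check:

    def is_hex_token(token):
        # Mirrors the int(chunk['token'], 16) check guarded by HTTP 401.
        try:
            int(token, 16)
            return True
        except (TypeError, ValueError):
            return False

    assert is_hex_token('6d1b722e')
    assert not is_hex_token('../../../etc/shadow')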
From 5fc76e5384561070a5a6ccdfb096aca16201d04a Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Mihai=20Dinc=C4=83?= <dincamihai@users.noreply.github.com> Date: Tue, 26 Nov 2019 18:26:31 +0100 Subject: [PATCH] Fixing StreamClosed issue
--- salt/cli/batch_async.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/salt/cli/batch_async.py b/salt/cli/batch_async.py index 754c257b36..c4545e3ebc 100644 --- a/salt/cli/batch_async.py +++ b/salt/cli/batch_async.py @@ -221,7 +221,6 @@ class BatchAsync(object): "metadata": self.metadata } self.event.fire_event(data, "salt/batch/{0}/done".format(self.batch_jid)) - self.event.remove_event_handler(self.__event_handler) for (pattern, label) in self.patterns: if label in ["ping_return", "batch_run"]: self.event.unsubscribe(pattern, match_type='glob') @@ -265,6 +264,7 @@ class BatchAsync(object): def __del__(self): self.local = None + self.event.remove_event_handler(self.__event_handler) self.event = None self.ioloop = None gc.collect() -- 2.23.0 ++++++ get-os_arch-also-without-rpm-package-installed.patch ++++++
From 11c9eacc439697e9fa7b30918963e4736333ed36 Mon Sep 17 00:00:00 2001 From: Bo Maryniuk <bo@suse.de> Date: Wed, 14 Nov 2018 17:36:23 +0100 Subject: [PATCH] Get os_arch also without RPM package installed
backport pkg.rpm test Add pkg.rpm unit test case Fix docstring Add UT for getting OS architecture fallback, when no RPM found (initrd, e.g.) Add UT for OS architecture detection on fallback, when no CPU arch can be determined Add UT for OS arch detection when no CPU arch or machine can be determined Remove unsupported testcase --- salt/utils/pkg/rpm.py | 18 ++++-- tests/unit/utils/test_pkg.py | 105 ++++++++++++++++++++++------------- 2 files changed, 77 insertions(+), 46 deletions(-) diff --git a/salt/utils/pkg/rpm.py b/salt/utils/pkg/rpm.py index 94e231da4b..bb8c3fb589 100644 --- a/salt/utils/pkg/rpm.py +++ b/salt/utils/pkg/rpm.py @@ -9,7 +9,9 @@ import collections import datetime import logging import subprocess +import platform import salt.utils.stringutils +import salt.utils.path # Import 3rd-party libs from salt.ext import six @@ -42,12 +44,16 @@ def get_osarch(): ''' Get the os architecture using rpm --eval ''' - ret = subprocess.Popen( - 'rpm --eval "%{_host_cpu}"', - shell=True, - close_fds=True, - stdout=subprocess.PIPE, - stderr=subprocess.PIPE).communicate()[0] + if salt.utils.path.which('rpm'): + ret = subprocess.Popen( + 'rpm --eval "%{_host_cpu}"', + shell=True, + close_fds=True, + stdout=subprocess.PIPE, + stderr=subprocess.PIPE).communicate()[0] + else: + ret = ''.join(list(filter(None, platform.uname()[-2:]))[-1:]) + return salt.utils.stringutils.to_str(ret).strip() or 'unknown' diff --git a/tests/unit/utils/test_pkg.py b/tests/unit/utils/test_pkg.py index c293852058..361e0bf92f 100644 --- a/tests/unit/utils/test_pkg.py +++ b/tests/unit/utils/test_pkg.py @@ -1,47 +1,72 @@ # -*- coding: utf-8 -*- -# Import Python libs -from __future__ import absolute_import -# Import Salt Libs +from __future__ import absolute_import, unicode_literals, print_function + +from tests.support.unit import TestCase, skipIf +from tests.support.mock import Mock, MagicMock, patch, NO_MOCK, NO_MOCK_REASON import salt.utils.pkg -# Import Salt Testing Libs -from tests.support.unit import TestCase +from salt.utils.pkg import rpm + +try: + import pytest +except ImportError: + pytest = None -class PkgUtilsTestCase(TestCase): +@skipIf(NO_MOCK, NO_MOCK_REASON) +@skipIf(pytest is None, 'PyTest is missing') +class PkgRPMTestCase(TestCase): ''' - TestCase for salt.utils.pkg module + Test case for pkg.rpm utils ''' - test_parameters = [ - ("16.0.0.49153-0+f1", "", "16.0.0.49153-0+f1"), - ("> 15.0.0", ">", "15.0.0"), - ("< 15.0.0", "<", "15.0.0"), - ("<< 15.0.0", "<<", "15.0.0"), - (">> 15.0.0", ">>", "15.0.0"), - (">= 15.0.0", ">=", "15.0.0"), - ("<= 15.0.0", "<=", "15.0.0"), - ("!= 15.0.0", "!=", "15.0.0"), - ("<=> 15.0.0", "<=>", "15.0.0"), - ("<> 15.0.0", "<>", "15.0.0"), - ("= 15.0.0", "=", "15.0.0"), - (">15.0.0", ">", "15.0.0"), - ("<15.0.0", "<", "15.0.0"), - ("<<15.0.0", "<<", "15.0.0"), - (">>15.0.0", ">>", "15.0.0"), - (">=15.0.0", ">=", "15.0.0"), - ("<=15.0.0", "<=", "15.0.0"), - ("!=15.0.0", "!=", "15.0.0"), - ("<=>15.0.0", "<=>", "15.0.0"), - ("<>15.0.0", "<>", "15.0.0"), - ("=15.0.0", "=", "15.0.0"), - ("", "", "") - ] - - def test_split_comparison(self): - ''' - Tests salt.utils.pkg.split_comparison - ''' - for test_parameter in self.test_parameters: - oper, verstr = salt.utils.pkg.split_comparison(test_parameter[0]) - self.assertEqual(test_parameter[1], oper) - self.assertEqual(test_parameter[2], verstr) + + @patch('salt.utils.path.which', MagicMock(return_value=True)) + def test_get_osarch_by_rpm(self): + ''' + Get os_arch if RPM package is installed. 
+ :return: + ''' + subprocess_mock = MagicMock() + subprocess_mock.Popen = MagicMock() + subprocess_mock.Popen().communicate = MagicMock(return_value=['Z80']) + with patch('salt.utils.pkg.rpm.subprocess', subprocess_mock): + assert rpm.get_osarch() == 'Z80' + assert subprocess_mock.Popen.call_count == 2 # One within the mock + assert subprocess_mock.Popen.call_args[1]['close_fds'] + assert subprocess_mock.Popen.call_args[1]['shell'] + assert len(subprocess_mock.Popen.call_args_list) == 2 + assert subprocess_mock.Popen.call_args[0][0] == 'rpm --eval "%{_host_cpu}"' + + @patch('salt.utils.path.which', MagicMock(return_value=False)) + @patch('salt.utils.pkg.rpm.subprocess', MagicMock(return_value=False)) + @patch('salt.utils.pkg.rpm.platform.uname', MagicMock( + return_value=('Sinclair BASIC', 'motophone', '1982 Sinclair Research Ltd', '1.0', 'ZX81', 'Z80'))) + def test_get_osarch_by_platform(self): + ''' + Get os_arch if RPM package is not installed (initrd image, for example). + :return: + ''' + assert rpm.get_osarch() == 'Z80' + + @patch('salt.utils.path.which', MagicMock(return_value=False)) + @patch('salt.utils.pkg.rpm.subprocess', MagicMock(return_value=False)) + @patch('salt.utils.pkg.rpm.platform.uname', MagicMock( + return_value=('Sinclair BASIC', 'motophone', '1982 Sinclair Research Ltd', '1.0', 'ZX81', ''))) + def test_get_osarch_by_platform_no_cpu_arch(self): + ''' + Get os_arch if RPM package is not installed (initrd image, for example) but cpu arch cannot be determined. + :return: + ''' + assert rpm.get_osarch() == 'ZX81' + + @patch('salt.utils.path.which', MagicMock(return_value=False)) + @patch('salt.utils.pkg.rpm.subprocess', MagicMock(return_value=False)) + @patch('salt.utils.pkg.rpm.platform.uname', MagicMock( + return_value=('Sinclair BASIC', 'motophone', '1982 Sinclair Research Ltd', '1.0', '', ''))) + def test_get_osarch_by_platform_no_cpu_arch_no_machine(self): + ''' + Get os_arch if RPM package is not installed (initrd image, for example) + where both cpu arch and machine cannot be determined. + :return: + ''' + assert rpm.get_osarch() == 'unknown' -- 2.17.1 ++++++ implement-network.fqdns-module-function-bsc-1134860-.patch ++++++
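As a minimal illustration of the fallback path exercised by the tests above: when salt.utils.path.which('rpm') fails, get_osarch() derives the architecture from platform.uname() instead. A standalone sketch with hypothetical values (the tuple stands in for a real platform.uname() result):

    # (system, node, release, version, machine, processor), as platform.uname() returns
    uname = ('Linux', 'host', '5.3.18', '#1 SMP', 'x86_64', '')
    # Take the last two fields, drop empty ones, keep the last non-empty value:
    # 'processor' wins when present, otherwise 'machine' is used.
    arch = ''.join(list(filter(None, uname[-2:]))[-1:])
    print(arch or 'unknown')  # -> 'x86_64'; 'unknown' if both fields were empty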
From 76d0ec5ec0764f6c5e71ddc2dc03bd12c25045a0 Mon Sep 17 00:00:00 2001 From: EricS <54029547+ESiebigteroth@users.noreply.github.com> Date: Tue, 3 Sep 2019 11:22:53 +0200 Subject: [PATCH] Implement network.fqdns module function (bsc#1134860) (#172)
* Duplicate fqdns logic in module.network * Move _get_interfaces to utils.network * Reuse network.fqdns in grains.core.fqdns * Return empty list when fqdns grains is disabled Co-authored-by: Eric Siebigteroth <eric.siebigteroth@suse.de> --- salt/grains/core.py | 66 +++++----------------------------- salt/modules/network.py | 60 +++++++++++++++++++++++++++++++ salt/utils/network.py | 12 +++++++ tests/unit/grains/test_core.py | 64 ++++++++++++++++++++++++++------- 4 files changed, 131 insertions(+), 71 deletions(-) diff --git a/salt/grains/core.py b/salt/grains/core.py index e54212edfb..fa188a6ff7 100644 --- a/salt/grains/core.py +++ b/salt/grains/core.py @@ -25,8 +25,9 @@ import zlib from errno import EACCES, EPERM import datetime import warnings +import salt.modules.network -from multiprocessing.pool import ThreadPool +from salt.utils.network import _get_interfaces # pylint: disable=import-error try: @@ -83,6 +84,7 @@ __salt__ = { 'cmd.run_all': salt.modules.cmdmod._run_all_quiet, 'smbios.records': salt.modules.smbios.records, 'smbios.get': salt.modules.smbios.get, + 'network.fqdns': salt.modules.network.fqdns, } log = logging.getLogger(__name__) @@ -106,7 +108,6 @@ HAS_UNAME = True if not hasattr(os, 'uname'): HAS_UNAME = False -_INTERFACES = {} # Possible value for h_errno defined in netdb.h HOST_NOT_FOUND = 1 @@ -1506,17 +1507,6 @@ def _linux_bin_exists(binary): return False -def _get_interfaces(): - ''' - Provide a dict of the connected interfaces and their ip addresses - ''' - - global _INTERFACES - if not _INTERFACES: - _INTERFACES = salt.utils.network.interfaces() - return _INTERFACES - - def _parse_lsb_release(): ret = {} try: @@ -2200,52 +2190,12 @@ def fqdns(): ''' Return all known FQDNs for the system by enumerating all interfaces and then trying to reverse resolve them (excluding 'lo' interface). + To disable the fqdns grain, set enable_fqdns_grains: False in the minion configuration file. ''' - # Provides: - # fqdns - - grains = {} - fqdns = set() - - def _lookup_fqdn(ip): - try: - name, aliaslist, addresslist = socket.gethostbyaddr(ip) - return [socket.getfqdn(name)] + [als for als in aliaslist if salt.utils.network.is_fqdn(als)] - except socket.herror as err: - if err.errno in (0, HOST_NOT_FOUND, NO_DATA): - # No FQDN for this IP address, so we don't need to know this all the time. - log.debug("Unable to resolve address %s: %s", ip, err) - else: - log.error(err_message, err) - except (socket.error, socket.gaierror, socket.timeout) as err: - log.error(err_message, err) - - start = time.time() - - addresses = salt.utils.network.ip_addrs(include_loopback=False, interface_data=_get_interfaces()) - addresses.extend(salt.utils.network.ip_addrs6(include_loopback=False, interface_data=_get_interfaces())) - err_message = 'Exception during resolving address: %s' - - # Create a ThreadPool to process the underlying calls to 'socket.gethostbyaddr' in parallel. - # This avoid blocking the execution when the "fqdn" is not defined for certains IP addresses, which was causing - # that "socket.timeout" was reached multiple times secuencially, blocking execution for several seconds. 
- - try: - pool = ThreadPool(8) - results = pool.map(_lookup_fqdn, addresses) - pool.close() - pool.join() - except Exception as exc: - log.error("Exception while creating a ThreadPool for resolving FQDNs: %s", exc) - - for item in results: - if item: - fqdns.update(item) - - elapsed = time.time() - start - log.debug('Elapsed time getting FQDNs: {} seconds'.format(elapsed)) - - return {"fqdns": sorted(list(fqdns))} + opt = {"fqdns": []} + if __opts__.get('enable_fqdns_grains', True) == True: + opt = __salt__['network.fqdns']() + return opt def ip_fqdn(): diff --git a/salt/modules/network.py b/salt/modules/network.py index 28bcff1622..5b6ac930ea 100644 --- a/salt/modules/network.py +++ b/salt/modules/network.py @@ -11,6 +11,10 @@ import logging import re import os import socket +import time + +from multiprocessing.pool import ThreadPool + # Import salt libs import salt.utils.decorators.path @@ -1881,3 +1885,59 @@ def iphexval(ip): a = ip.split('.') hexval = ['%02X' % int(x) for x in a] # pylint: disable=E1321 return ''.join(hexval) + + +def fqdns(): + ''' + Return all known FQDNs for the system by enumerating all interfaces and + then trying to reverse resolve them (excluding 'lo' interface). + ''' + # Provides: + # fqdns + + # Possible value for h_errno defined in netdb.h + HOST_NOT_FOUND = 1 + NO_DATA = 4 + + grains = {} + fqdns = set() + + def _lookup_fqdn(ip): + try: + name, aliaslist, addresslist = socket.gethostbyaddr(ip) + return [socket.getfqdn(name)] + [als for als in aliaslist if salt.utils.network.is_fqdn(als)] + except socket.herror as err: + if err.errno in (0, HOST_NOT_FOUND, NO_DATA): + # No FQDN for this IP address, so we don't need to know this all the time. + log.debug("Unable to resolve address %s: %s", ip, err) + else: + log.error(err_message, err) + except (socket.error, socket.gaierror, socket.timeout) as err: + log.error(err_message, err) + + start = time.time() + + addresses = salt.utils.network.ip_addrs(include_loopback=False, interface_data=salt.utils.network._get_interfaces()) + addresses.extend(salt.utils.network.ip_addrs6(include_loopback=False, interface_data=salt.utils.network._get_interfaces())) + err_message = 'Exception during resolving address: %s' + + # Create a ThreadPool to process the underlying calls to 'socket.gethostbyaddr' in parallel. + # This avoid blocking the execution when the "fqdn" is not defined for certains IP addresses, which was causing + # that "socket.timeout" was reached multiple times secuencially, blocking execution for several seconds. + + try: + pool = ThreadPool(8) + results = pool.map(_lookup_fqdn, addresses) + pool.close() + pool.join() + except Exception as exc: + log.error("Exception while creating a ThreadPool for resolving FQDNs: %s", exc) + + for item in results: + if item: + fqdns.update(item) + + elapsed = time.time() - start + log.debug('Elapsed time getting FQDNs: {} seconds'.format(elapsed)) + + return {"fqdns": sorted(list(fqdns))} \ No newline at end of file diff --git a/salt/utils/network.py b/salt/utils/network.py index 3f0522b9a5..942adf1ca4 100644 --- a/salt/utils/network.py +++ b/salt/utils/network.py @@ -55,6 +55,18 @@ except (ImportError, OSError, AttributeError, TypeError): # pylint: disable=C0103 +_INTERFACES = {} +def _get_interfaces(): #! function + ''' + Provide a dict of the connected interfaces and their ip addresses + ''' + + global _INTERFACES + if not _INTERFACES: + _INTERFACES = interfaces() + return _INTERFACES + + def sanitize_host(host): ''' Sanitize host string. 
diff --git a/tests/unit/grains/test_core.py b/tests/unit/grains/test_core.py index 5fa0ea06f1..889fb90074 100644 --- a/tests/unit/grains/test_core.py +++ b/tests/unit/grains/test_core.py @@ -33,6 +33,7 @@ import salt.utils.network import salt.utils.platform import salt.utils.path import salt.grains.core as core +import salt.modules.network # Import 3rd-party libs from salt.ext import six @@ -845,6 +846,40 @@ class CoreGrainsTestCase(TestCase, LoaderModuleMockMixin): with patch.object(salt.utils.dns, 'parse_resolv', MagicMock(return_value=resolv_mock)): assert core.dns() == ret + + def test_enablefqdnsFalse(self): + ''' + tests enable_fqdns_grains is set to False + ''' + with patch.dict('salt.grains.core.__opts__', {'enable_fqdns_grains':False}): + assert core.fqdns() == {"fqdns": []} + + + def test_enablefqdnsTrue(self): + ''' + testing that grains uses network.fqdns module + ''' + with patch.dict('salt.grains.core.__salt__', {'network.fqdns': MagicMock(return_value="my.fake.domain")}): + with patch.dict('salt.grains.core.__opts__', {'enable_fqdns_grains':True}): + assert core.fqdns() == 'my.fake.domain' + + + def test_enablefqdnsNone(self): + ''' + testing default fqdns grains is returned when enable_fqdns_grains is None + ''' + with patch.dict('salt.grains.core.__opts__', {'enable_fqdns_grains':None}): + assert core.fqdns() == {"fqdns": []} + + + def test_enablefqdnswithoutpaching(self): + ''' + testing fqdns grains is enabled by default + ''' + with patch.dict('salt.grains.core.__salt__', {'network.fqdns': MagicMock(return_value="my.fake.domain")}): + assert core.fqdns() == 'my.fake.domain' + + @skipIf(not salt.utils.platform.is_linux(), 'System is not Linux') @patch.object(salt.utils, 'is_windows', MagicMock(return_value=False)) @patch('salt.utils.network.ip_addrs', MagicMock(return_value=['1.2.3.4', '5.6.7.8'])) @@ -861,11 +896,12 @@ class CoreGrainsTestCase(TestCase, LoaderModuleMockMixin): ('foo.bar.baz', [], ['fe80::a8b2:93ff:fe00:0']), ('bluesniff.foo.bar', [], ['fe80::a8b2:93ff:dead:beef'])] ret = {'fqdns': ['bluesniff.foo.bar', 'foo.bar.baz', 'rinzler.evil-corp.com']} - with patch.object(socket, 'gethostbyaddr', side_effect=reverse_resolv_mock): - fqdns = core.fqdns() - assert "fqdns" in fqdns - assert len(fqdns['fqdns']) == len(ret['fqdns']) - assert set(fqdns['fqdns']) == set(ret['fqdns']) + with patch.dict(core.__salt__, {'network.fqdns': salt.modules.network.fqdns}): + with patch.object(socket, 'gethostbyaddr', side_effect=reverse_resolv_mock): + fqdns = core.fqdns() + assert "fqdns" in fqdns + assert len(fqdns['fqdns']) == len(ret['fqdns']) + assert set(fqdns['fqdns']) == set(ret['fqdns']) @skipIf(not salt.utils.platform.is_linux(), 'System is not Linux') @patch.object(salt.utils.platform, 'is_windows', MagicMock(return_value=False)) @@ -881,14 +917,16 @@ class CoreGrainsTestCase(TestCase, LoaderModuleMockMixin): ('rinzler.evil-corp.com', ["false-hostname", "badaliass"], ['5.6.7.8']), ('foo.bar.baz', [], ['fe80::a8b2:93ff:fe00:0']), ('bluesniff.foo.bar', ["alias.bluesniff.foo.bar"], ['fe80::a8b2:93ff:dead:beef'])] - with patch.object(socket, 'gethostbyaddr', side_effect=reverse_resolv_mock): - fqdns = core.fqdns() - assert "fqdns" in fqdns - for alias in ["this.is.valid.alias", "alias.bluesniff.foo.bar"]: - assert alias in fqdns["fqdns"] - - for alias in ["throwmeaway", "false-hostname", "badaliass"]: - assert alias not in fqdns["fqdns"] + with patch.dict(core.__salt__, {'network.fqdns': salt.modules.network.fqdns}): + with patch.object(socket, 'gethostbyaddr', 
side_effect=reverse_resolv_mock): + fqdns = core.fqdns() + assert "fqdns" in fqdns + for alias in ["this.is.valid.alias", "alias.bluesniff.foo.bar"]: + assert alias in fqdns["fqdns"] + + for alias in ["throwmeaway", "false-hostname", "badaliass"]: + assert alias not in fqdns["fqdns"] + def test_core_virtual(self): ''' test virtual grain with cmd virt-what -- 2.22.0 ++++++ improve-batch_async-to-release-consumed-memory-bsc-1.patch ++++++
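The heart of the new network.fqdns() above is a reverse lookup fanned out over a ThreadPool; a self-contained sketch of that shape, assuming nothing beyond the standard library (the addresses are documentation placeholders):

    import socket
    from multiprocessing.pool import ThreadPool

    def _lookup_fqdn(ip):
        # Reverse-resolve one address; addresses without a PTR record are skipped.
        try:
            name, aliases, _addrs = socket.gethostbyaddr(ip)
            return [socket.getfqdn(name)] + aliases
        except (socket.herror, socket.gaierror, socket.timeout):
            return []

    addresses = ['192.0.2.10', '198.51.100.20']  # placeholder IPs

    # Eight workers keep one slow or timed-out lookup from serializing the
    # whole run, which is the stall the original grain suffered from.
    pool = ThreadPool(8)
    try:
        results = pool.map(_lookup_fqdn, addresses)
    finally:
        pool.close()
        pool.join()

    print({'fqdns': sorted({name for found in results for name in found})})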
From 002543df392f65d95dbc127dc058ac897f2035ed Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Pablo=20Su=C3=A1rez=20Hern=C3=A1ndez?= <psuarezhernandez@suse.com> Date: Thu, 26 Sep 2019 10:41:06 +0100 Subject: [PATCH] Improve batch_async to release consumed memory (bsc#1140912)
--- salt/cli/batch_async.py | 73 +++++++++++++++++++++++++---------------- 1 file changed, 45 insertions(+), 28 deletions(-) diff --git a/salt/cli/batch_async.py b/salt/cli/batch_async.py index 8a67331102..2bb50459c8 100644 --- a/salt/cli/batch_async.py +++ b/salt/cli/batch_async.py @@ -5,6 +5,7 @@ Execute a job on the targeted minions by using a moving window of fixed size `ba # Import python libs from __future__ import absolute_import, print_function, unicode_literals +import gc import tornado # Import salt libs @@ -77,6 +78,7 @@ class BatchAsync(object): self.batch_jid = jid_gen() self.find_job_jid = jid_gen() self.find_job_returned = set() + self.ended = False self.event = salt.utils.event.get_event( 'master', self.opts['sock_dir'], @@ -86,6 +88,7 @@ class BatchAsync(object): io_loop=ioloop, keep_loop=True) self.scheduled = False + self.patterns = {} def __set_event_handler(self): ping_return_pattern = 'salt/job/{0}/ret/*'.format(self.ping_jid) @@ -116,7 +119,7 @@ class BatchAsync(object): if minion in self.active: self.active.remove(minion) self.done_minions.add(minion) - self.schedule_next() + self.event.io_loop.spawn_callback(self.schedule_next) def _get_next(self): to_run = self.minions.difference( @@ -129,23 +132,23 @@ class BatchAsync(object): ) return set(list(to_run)[:next_batch_size]) - @tornado.gen.coroutine def check_find_job(self, batch_minions, jid): - find_job_return_pattern = 'salt/job/{0}/ret/*'.format(jid) - self.event.unsubscribe(find_job_return_pattern, match_type='glob') - self.patterns.remove((find_job_return_pattern, "find_job_return")) + if self.event: + find_job_return_pattern = 'salt/job/{0}/ret/*'.format(jid) + self.event.unsubscribe(find_job_return_pattern, match_type='glob') + self.patterns.remove((find_job_return_pattern, "find_job_return")) - timedout_minions = batch_minions.difference(self.find_job_returned).difference(self.done_minions) - self.timedout_minions = self.timedout_minions.union(timedout_minions) - self.active = self.active.difference(self.timedout_minions) - running = batch_minions.difference(self.done_minions).difference(self.timedout_minions) + timedout_minions = batch_minions.difference(self.find_job_returned).difference(self.done_minions) + self.timedout_minions = self.timedout_minions.union(timedout_minions) + self.active = self.active.difference(self.timedout_minions) + running = batch_minions.difference(self.done_minions).difference(self.timedout_minions) - if timedout_minions: - self.schedule_next() + if timedout_minions: + self.schedule_next() - if running: - self.find_job_returned = self.find_job_returned.difference(running) - self.event.io_loop.add_callback(self.find_job, running) + if running: + self.find_job_returned = self.find_job_returned.difference(running) + self.event.io_loop.spawn_callback(self.find_job, running) @tornado.gen.coroutine def find_job(self, minions): @@ -165,8 +168,8 @@ class BatchAsync(object): gather_job_timeout=self.opts['gather_job_timeout'], jid=jid, **self.eauth) - self.event.io_loop.call_later( - self.opts['gather_job_timeout'], + yield tornado.gen.sleep(self.opts['gather_job_timeout']) + self.event.io_loop.spawn_callback( self.check_find_job, not_done, jid) @@ -174,10 +177,6 @@ class BatchAsync(object): @tornado.gen.coroutine def start(self): self.__set_event_handler() - #start batching even if not all minions respond to ping - self.event.io_loop.call_later( - self.batch_presence_ping_timeout or self.opts['gather_job_timeout'], - self.start_batch) ping_return = yield self.local.run_job_async( 
self.opts['tgt'], 'test.ping', @@ -191,6 +190,10 @@ class BatchAsync(object): metadata=self.metadata, **self.eauth) self.targeted_minions = set(ping_return['minions']) + #start batching even if not all minions respond to ping + yield tornado.gen.sleep(self.batch_presence_ping_timeout or self.opts['gather_job_timeout']) + self.event.io_loop.spawn_callback(self.start_batch) + @tornado.gen.coroutine def start_batch(self): @@ -202,12 +205,14 @@ class BatchAsync(object): "down_minions": self.targeted_minions.difference(self.minions), "metadata": self.metadata } - self.event.fire_event(data, "salt/batch/{0}/start".format(self.batch_jid)) - yield self.run_next() + ret = self.event.fire_event(data, "salt/batch/{0}/start".format(self.batch_jid)) + self.event.io_loop.spawn_callback(self.run_next) + @tornado.gen.coroutine def end_batch(self): left = self.minions.symmetric_difference(self.done_minions.union(self.timedout_minions)) - if not left: + if not left and not self.ended: + self.ended = True data = { "available_minions": self.minions, "down_minions": self.targeted_minions.difference(self.minions), @@ -220,20 +225,26 @@ class BatchAsync(object): for (pattern, label) in self.patterns: if label in ["ping_return", "batch_run"]: self.event.unsubscribe(pattern, match_type='glob') + del self + gc.collect() + yield + @tornado.gen.coroutine def schedule_next(self): if not self.scheduled: self.scheduled = True # call later so that we maybe gather more returns - self.event.io_loop.call_later(self.batch_delay, self.run_next) + yield tornado.gen.sleep(self.batch_delay) + self.event.io_loop.spawn_callback(self.run_next) @tornado.gen.coroutine def run_next(self): + self.scheduled = False next_batch = self._get_next() if next_batch: self.active = self.active.union(next_batch) try: - yield self.local.run_job_async( + ret = yield self.local.run_job_async( next_batch, self.opts['fun'], self.opts['arg'], @@ -244,11 +255,17 @@ class BatchAsync(object): jid=self.batch_jid, metadata=self.metadata) - self.event.io_loop.call_later(self.opts['timeout'], self.find_job, set(next_batch)) + yield tornado.gen.sleep(self.opts['timeout']) + self.event.io_loop.spawn_callback(self.find_job, set(next_batch)) except Exception as ex: log.error("Error in scheduling next batch: %s", ex) self.active = self.active.difference(next_batch) else: - self.end_batch() - self.scheduled = False + yield self.end_batch() + gc.collect() yield + + def __del__(self): + self.event = None + self.ioloop = None + gc.collect() -- 2.22.0 ++++++ include-aliases-in-the-fqdns-grains.patch ++++++
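The recurring edit in the batch_async patch above swaps io_loop.call_later(delay, fn) for a sleep inside the coroutine followed by spawn_callback, so no timer keeps a reference to the instance once the work is handed off. A minimal sketch of the pattern, assuming only Tornado (the payload is hypothetical):

    import tornado.gen
    import tornado.ioloop

    @tornado.gen.coroutine
    def schedule_next(delay):
        # Sleep in-coroutine instead of registering a call_later timeout; after
        # spawn_callback fires, nothing retains this frame (or its self).
        yield tornado.gen.sleep(delay)
        tornado.ioloop.IOLoop.current().spawn_callback(run_next)

    @tornado.gen.coroutine
    def run_next():
        print('next batch would run here')  # hypothetical payload
        tornado.ioloop.IOLoop.current().stop()

    io_loop = tornado.ioloop.IOLoop.current()
    io_loop.spawn_callback(schedule_next, 0.1)
    io_loop.start()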
From 5dc6f2a59a8a774d13dcfd36b25ea735df18f10f Mon Sep 17 00:00:00 2001 From: Bo Maryniuk <bo@suse.de> Date: Tue, 29 Jan 2019 11:11:38 +0100 Subject: [PATCH] Include aliases in the fqdns grains
Add UT for "is_fqdn" Add "is_fqdn" check to the network utils Bugfix: include FQDNs aliases Deprecate UnitTest assertion in favour of built-in assert keyword Add UT for fqdns aliases Leverage cached interfaces, if any. --- salt/grains/core.py | 12 +++++------- salt/utils/network.py | 12 ++++++++++++ tests/unit/grains/test_core.py | 28 +++++++++++++++++++++++++--- tests/unit/utils/test_network.py | 19 +++++++++++++++++++ 4 files changed, 61 insertions(+), 10 deletions(-) diff --git a/salt/grains/core.py b/salt/grains/core.py index b0c1acceeb..05a9d5035d 100644 --- a/salt/grains/core.py +++ b/salt/grains/core.py @@ -2200,14 +2200,13 @@ def fqdns(): grains = {} fqdns = set() - addresses = salt.utils.network.ip_addrs(include_loopback=False, - interface_data=_INTERFACES) - addresses.extend(salt.utils.network.ip_addrs6(include_loopback=False, - interface_data=_INTERFACES)) + addresses = salt.utils.network.ip_addrs(include_loopback=False, interface_data=_get_interfaces()) + addresses.extend(salt.utils.network.ip_addrs6(include_loopback=False, interface_data=_get_interfaces())) err_message = 'Exception during resolving address: %s' for ip in addresses: try: - fqdns.add(socket.getfqdn(socket.gethostbyaddr(ip)[0])) + name, aliaslist, addresslist = socket.gethostbyaddr(ip) + fqdns.update([socket.getfqdn(name)] + [als for als in aliaslist if salt.utils.network.is_fqdn(als)]) except socket.herror as err: if err.errno == 0: # No FQDN for this IP address, so we don't need to know this all the time. @@ -2217,8 +2216,7 @@ def fqdns(): except (socket.error, socket.gaierror, socket.timeout) as err: log.error(err_message, err) - grains['fqdns'] = sorted(list(fqdns)) - return grains + return {"fqdns": sorted(list(fqdns))} def ip_fqdn(): diff --git a/salt/utils/network.py b/salt/utils/network.py index 83269cdcf6..c72d2aec41 100644 --- a/salt/utils/network.py +++ b/salt/utils/network.py @@ -2016,3 +2016,15 @@ def parse_host_port(host_port): raise ValueError('bad hostname: "{}"'.format(host)) return host, port + + +def is_fqdn(hostname): + """ + Verify if hostname conforms to be a FQDN. + + :param hostname: text string with the name of the host + :return: bool, True if hostname is correct FQDN, False otherwise + """ + + compliant = re.compile(r"(?!-)[A-Z\d\-\_]{1,63}(?<!-)$", re.IGNORECASE) + return "." 
in hostname and len(hostname) < 0xff and all(compliant.match(x) for x in hostname.rstrip(".").split(".")) diff --git a/tests/unit/grains/test_core.py b/tests/unit/grains/test_core.py index d5a1b1a36b..117e02c39f 100644 --- a/tests/unit/grains/test_core.py +++ b/tests/unit/grains/test_core.py @@ -863,10 +863,32 @@ class CoreGrainsTestCase(TestCase, LoaderModuleMockMixin): ret = {'fqdns': ['bluesniff.foo.bar', 'foo.bar.baz', 'rinzler.evil-corp.com']} with patch.object(socket, 'gethostbyaddr', side_effect=reverse_resolv_mock): fqdns = core.fqdns() - self.assertIn('fqdns', fqdns) - self.assertEqual(len(fqdns['fqdns']), len(ret['fqdns'])) - self.assertEqual(set(fqdns['fqdns']), set(ret['fqdns'])) + assert "fqdns" in fqdns + assert len(fqdns['fqdns']) == len(ret['fqdns']) + assert set(fqdns['fqdns']) == set(ret['fqdns']) + @skipIf(not salt.utils.platform.is_linux(), 'System is not Linux') + @patch.object(salt.utils.platform, 'is_windows', MagicMock(return_value=False)) + @patch('salt.utils.network.ip_addrs', MagicMock(return_value=['1.2.3.4', '5.6.7.8'])) + @patch('salt.utils.network.ip_addrs6', + MagicMock(return_value=['fe80::a8b2:93ff:fe00:0', 'fe80::a8b2:93ff:dead:beef'])) + @patch('salt.utils.network.socket.getfqdn', MagicMock(side_effect=lambda v: v)) # Just pass-through + def test_fqdns_aliases(self): + ''' + FQDNs aliases + ''' + reverse_resolv_mock = [('foo.bar.baz', ["throwmeaway", "this.is.valid.alias"], ['1.2.3.4']), + ('rinzler.evil-corp.com', ["false-hostname", "badaliass"], ['5.6.7.8']), + ('foo.bar.baz', [], ['fe80::a8b2:93ff:fe00:0']), + ('bluesniff.foo.bar', ["alias.bluesniff.foo.bar"], ['fe80::a8b2:93ff:dead:beef'])] + with patch.object(socket, 'gethostbyaddr', side_effect=reverse_resolv_mock): + fqdns = core.fqdns() + assert "fqdns" in fqdns + for alias in ["this.is.valid.alias", "alias.bluesniff.foo.bar"]: + assert alias in fqdns["fqdns"] + + for alias in ["throwmeaway", "false-hostname", "badaliass"]: + assert alias not in fqdns["fqdns"] def test_core_virtual(self): ''' test virtual grain with cmd virt-what diff --git a/tests/unit/utils/test_network.py b/tests/unit/utils/test_network.py index 3d20c880bd..ca627777a7 100644 --- a/tests/unit/utils/test_network.py +++ b/tests/unit/utils/test_network.py @@ -637,3 +637,22 @@ class NetworkTestCase(TestCase): # An exception is raised if unicode is passed to socket.getfqdn minion_id = network.generate_minion_id() assert minion_id != '', minion_id + + def test_is_fqdn(self): + """ + Test is_fqdn function passes possible FQDN names. + + :return: None + """ + for fqdn in ["host.domain.com", "something.with.the.dots.still.ok", "UPPERCASE.ALSO.SHOULD.WORK", + "MiXeD.CaSe.AcCePtAbLe", "123.host.com", "host123.com", "some_underscore.com", "host-here.com"]: + assert network.is_fqdn(fqdn) + + def test_is_not_fqdn(self): + """ + Test is_fqdn function rejects FQDN names. + + :return: None + """ + for fqdn in ["hostname", "/some/path", "$variable.here", "verylonghostname.{}".format("domain" * 45)]: + assert not network.is_fqdn(fqdn) -- 2.20.1 ++++++ integration-of-msi-authentication-with-azurearm-clou.patch ++++++
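Since is_fqdn() above depends only on re, it can be exercised directly; a quick check against the same names the new unit tests use:

    import re

    def is_fqdn(hostname):
        # Same rule as the patch: dotted, under 255 characters, every label
        # 1-63 characters of [A-Za-z0-9_-] not starting or ending with '-'.
        compliant = re.compile(r"(?!-)[A-Z\d\-\_]{1,63}(?<!-)$", re.IGNORECASE)
        return "." in hostname and len(hostname) < 0xff and all(
            compliant.match(x) for x in hostname.rstrip(".").split("."))

    assert is_fqdn("host.domain.com")
    assert is_fqdn("some_underscore.com")
    assert not is_fqdn("hostname")         # no dot
    assert not is_fqdn("$variable.here")   # label with an illegal character
    assert not is_fqdn("verylonghostname.{}".format("domain" * 45))  # label too long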
From 216342f03940080176111f5e0e31b43cd909e164 Mon Sep 17 00:00:00 2001 From: ed lane <ed.lane.0@gmail.com> Date: Thu, 30 Aug 2018 06:07:08 -0600 Subject: [PATCH] Integration of MSI authentication with azurearm cloud driver (#105)
--- salt/cloud/clouds/azurearm.py | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/salt/cloud/clouds/azurearm.py b/salt/cloud/clouds/azurearm.py index e8050dca16..229412adcd 100644 --- a/salt/cloud/clouds/azurearm.py +++ b/salt/cloud/clouds/azurearm.py @@ -58,6 +58,9 @@ The Azure ARM cloud module is used to control access to Microsoft Azure Resource virtual machine type will be "Windows". Only set this parameter on profiles which install Windows operating systems. + if using MSI-style authentication: + * ``subscription_id`` + Example ``/etc/salt/cloud.providers`` or ``/etc/salt/cloud.providers.d/azure.conf`` configuration: @@ -258,7 +261,8 @@ def get_configured_provider(): provider = __is_provider_configured( __opts__, __active_provider_name__ or __virtualname__, - ('subscription_id', 'username', 'password') + required_keys=('subscription_id', 'username', 'password'), + log_message=False ) return provider @@ -301,6 +305,7 @@ def get_conn(client_type): ) if tenant is not None: + # using Service Principle style authentication... client_id = config.get_cloud_config_value( 'client_id', get_configured_provider(), __opts__, search_global=False -- 2.17.1 ++++++ let-salt-ssh-use-platform-python-binary-in-rhel8-191.patch ++++++
From 8435f831147dfb9f936ea9ffcc898756da08995b Mon Sep 17 00:00:00 2001 From: Can Bulut Bayburt <1103552+cbbayburt@users.noreply.github.com> Date: Wed, 4 Dec 2019 15:59:46 +0100 Subject: [PATCH] Let salt-ssh use 'platform-python' binary in RHEL8 (#191)
RHEL/CentOS 8 has an internal Python interpreter called 'platform-python' included in the base setup. Add this binary to the list of Python executables to look for when creating the sh shim. --- salt/client/ssh/__init__.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/salt/client/ssh/__init__.py b/salt/client/ssh/__init__.py index 0df918d634..d5bc6e5c27 100644 --- a/salt/client/ssh/__init__.py +++ b/salt/client/ssh/__init__.py @@ -147,7 +147,7 @@ elif [ "$SUDO" ] && [ -n "$SUDO_USER" ] then SUDO="sudo " fi EX_PYTHON_INVALID={EX_THIN_PYTHON_INVALID} -PYTHON_CMDS="python3 python27 python2.7 python26 python2.6 python2 python" +PYTHON_CMDS="python3 /usr/libexec/platform-python python27 python2.7 python26 python2.6 python2 python" for py_cmd in $PYTHON_CMDS do if command -v "$py_cmd" >/dev/null 2>&1 && "$py_cmd" -c "import sys; sys.exit(not (sys.version_info >= (2, 6)));" -- 2.23.0 ++++++ list_downloaded-for-apt-module.patch ++++++
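The shim change above only prepends one candidate; the selection logic amounts to taking the first interpreter found on PATH. A rough Python rendering of the sh loop (the shim itself stays POSIX sh):

    import shutil

    # Same search order as the patched PYTHON_CMDS list in the shim.
    PYTHON_CMDS = ['python3', '/usr/libexec/platform-python', 'python27',
                   'python2.7', 'python26', 'python2.6', 'python2', 'python']

    chosen = next((cmd for cmd in PYTHON_CMDS if shutil.which(cmd)), None)
    print(chosen)  # e.g. '/usr/libexec/platform-python' on a minimal RHEL 8 host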
From 587026f65d1043ec930b908d4184aea0adc2d864 Mon Sep 17 00:00:00 2001 From: Jochen Breuer <jbreuer@suse.de> Date: Thu, 9 Jan 2020 10:11:13 +0100 Subject: [PATCH] list_downloaded for apt Module
--- salt/modules/aptpkg.py | 41 +++++++++++++++++++++++++++++++++++++++ salt/states/pkg.py | 4 ++-- tests/unit/modules/test_aptpkg.py | 29 +++++++++++++++++++++++++++ 3 files changed, 72 insertions(+), 2 deletions(-) diff --git a/salt/modules/aptpkg.py b/salt/modules/aptpkg.py index 1a60255a1d..023049b2af 100644 --- a/salt/modules/aptpkg.py +++ b/salt/modules/aptpkg.py @@ -18,6 +18,9 @@ import os import re import logging import time +import fnmatch +import datetime + # Import third party libs # pylint: disable=no-name-in-module,import-error,redefined-builtin @@ -422,6 +425,7 @@ def install(name=None, pkgs=None, sources=None, reinstall=False, + downloadonly=False, ignore_epoch=False, **kwargs): ''' @@ -768,6 +772,9 @@ def install(name=None, cmd.extend(downgrade) cmds.append(cmd) + if downloadonly: + cmd.append("--download-only") + if to_reinstall: all_pkgs.extend(to_reinstall) cmd = copy.deepcopy(cmd_prefix) @@ -2917,3 +2924,37 @@ def _get_http_proxy_url(): ) return http_proxy_url + + +def list_downloaded(root=None, **kwargs): + ''' + .. versionadded:: 3000? + + List prefetched packages downloaded by apt in the local disk. + + root + operate on a different root directory. + + CLI example: + + .. code-block:: bash + + salt '*' pkg.list_downloaded + ''' + CACHE_DIR = '/var/cache/apt' + if root: + CACHE_DIR = os.path.join(root, os.path.relpath(CACHE_DIR, os.path.sep)) + + ret = {} + for root, dirnames, filenames in salt.utils.path.os_walk(CACHE_DIR): + for filename in fnmatch.filter(filenames, '*.deb'): + package_path = os.path.join(root, filename) + pkg_info = __salt__['lowpkg.bin_pkg_info'](package_path) + pkg_timestamp = int(os.path.getctime(package_path)) + ret.setdefault(pkg_info['name'], {})[pkg_info['version']] = { + 'path': package_path, + 'size': os.path.getsize(package_path), + 'creation_date_time_t': pkg_timestamp, + 'creation_date_time': datetime.datetime.utcfromtimestamp(pkg_timestamp).isoformat(), + } + return ret diff --git a/salt/states/pkg.py b/salt/states/pkg.py index 22a97fe98c..be00498135 100644 --- a/salt/states/pkg.py +++ b/salt/states/pkg.py @@ -1975,7 +1975,7 @@ def downloaded(name, (if specified). Currently supported for the following pkg providers: - :mod:`yumpkg <salt.modules.yumpkg>` and :mod:`zypper <salt.modules.zypper>` + :mod:`yumpkg <salt.modules.yumpkg>`, :mod:`zypper <salt.modules.zypper>` and :mod:`zypper <salt.modules.aptpkg>` :param str name: The name of the package to be downloaded. This parameter is ignored if @@ -2114,7 +2114,7 @@ def downloaded(name, if not ret['changes'] and not ret['comment']: ret['result'] = True - ret['comment'] = 'Packages are already downloaded: ' \ + ret['comment'] = 'Packages downloaded: ' \ '{0}'.format(', '.join(targets)) return ret diff --git a/tests/unit/modules/test_aptpkg.py b/tests/unit/modules/test_aptpkg.py index bc6b610d86..5c7e38eae7 100644 --- a/tests/unit/modules/test_aptpkg.py +++ b/tests/unit/modules/test_aptpkg.py @@ -413,14 +413,17 @@ class AptPkgTestCase(TestCase, LoaderModuleMockMixin): with patch.multiple(aptpkg, **patch_kwargs): aptpkg.upgrade() args_matching = [True for args in patch_kwargs['__salt__']['cmd.run_all'].call_args[0] if "--download-only" in args] + # Here we shouldn't see the parameter and args_matching should be empty. 
self.assertFalse(any(args_matching)) aptpkg.upgrade(downloadonly=True) args_matching = [True for args in patch_kwargs['__salt__']['cmd.run_all'].call_args[0] if "--download-only" in args] + # --download-only should be in the args list and we should have at least one True in the list. self.assertTrue(any(args_matching)) aptpkg.upgrade(download_only=True) args_matching = [True for args in patch_kwargs['__salt__']['cmd.run_all'].call_args[0] if "--download-only" in args] + # --download-only should be in the args list and we should have at least one True in the list. self.assertTrue(any(args_matching)) def test_show(self): @@ -545,6 +548,32 @@ class AptPkgTestCase(TestCase, LoaderModuleMockMixin): self.assert_called_once(refresh_mock) refresh_mock.reset_mock() + @patch('salt.utils.path.os_walk', MagicMock(return_value=[('test', 'test', 'test')])) + @patch('os.path.getsize', MagicMock(return_value=123456)) + @patch('os.path.getctime', MagicMock(return_value=1234567890.123456)) + @patch('fnmatch.filter', MagicMock(return_value=['/var/cache/apt/archive/test_package.rpm'])) + def test_list_downloaded(self): + ''' + Test downloaded packages listing. + :return: + ''' + DOWNLOADED_RET = { + 'test-package': { + '1.0': { + 'path': '/var/cache/apt/archive/test_package.rpm', + 'size': 123456, + 'creation_date_time_t': 1234567890, + 'creation_date_time': '2009-02-13T23:31:30', + } + } + } + + with patch.dict(aptpkg.__salt__, {'lowpkg.bin_pkg_info': MagicMock(return_value={'name': 'test-package', + 'version': '1.0'})}): + list_downloaded = aptpkg.list_downloaded() + self.assertEqual(len(list_downloaded), 1) + self.assertDictEqual(list_downloaded, DOWNLOADED_RET) + @skipIf(pytest is None, 'PyTest is missing') class AptUtilsTestCase(TestCase, LoaderModuleMockMixin): -- 2.16.4 ++++++ loosen-azure-sdk-dependencies-in-azurearm-cloud-driv.patch ++++++
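A trimmed, standalone rendering of the new aptpkg.list_downloaded() walk above; the Salt loader call is replaced by a hypothetical stub so the sketch runs anywhere (salt.utils.path.os_walk is plain os.walk here):

    import datetime
    import fnmatch
    import os

    def bin_pkg_info_stub(path):
        # Stands in for __salt__['lowpkg.bin_pkg_info'] from the patch.
        return {'name': os.path.basename(path).split('_')[0], 'version': '1.0'}

    CACHE_DIR = '/var/cache/apt'
    ret = {}
    for root, _dirs, filenames in os.walk(CACHE_DIR):
        for filename in fnmatch.filter(filenames, '*.deb'):
            package_path = os.path.join(root, filename)
            pkg_info = bin_pkg_info_stub(package_path)
            pkg_timestamp = int(os.path.getctime(package_path))
            ret.setdefault(pkg_info['name'], {})[pkg_info['version']] = {
                'path': package_path,
                'size': os.path.getsize(package_path),
                'creation_date_time_t': pkg_timestamp,
                'creation_date_time': datetime.datetime.utcfromtimestamp(
                    pkg_timestamp).isoformat(),
            }
    print(ret)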
From 8fe82178247ff3704915b578398ea55b0c6e4fa0 Mon Sep 17 00:00:00 2001 From: Joachim Gleissner <jgleissner@suse.com> Date: Tue, 18 Sep 2018 15:07:13 +0200 Subject: [PATCH] loosen azure sdk dependencies in azurearm cloud driver
Remove the dependency on azure-cli, which is not used at all. Use azure-storage-sdk as a fallback if the multiapi version is not available. Remove an unused import from the azurearm driver. --- salt/cloud/clouds/azurearm.py | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/salt/cloud/clouds/azurearm.py b/salt/cloud/clouds/azurearm.py index 229412adcd..ac59467fb3 100644 --- a/salt/cloud/clouds/azurearm.py +++ b/salt/cloud/clouds/azurearm.py @@ -104,6 +104,7 @@ import time # Salt libs from salt.ext import six +import pkgutil import salt.cache import salt.config as config import salt.loader @@ -126,6 +127,11 @@ try: import azure.mgmt.network.models as network_models from azure.storage.blob.blockblobservice import BlockBlobService from msrestazure.azure_exceptions import CloudError + if pkgutil.find_loader('azure.multiapi'): + # use multiapi version if available + from azure.multiapi.storage.v2016_05_31 import CloudStorageAccount + else: + from azure.storage import CloudStorageAccount HAS_LIBS = True except ImportError: pass -- 2.17.1 ++++++ make-aptpkg.list_repos-compatible-on-enabled-disable.patch ++++++
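The guarded import added above is a general pattern for preferring an optional package over a fallback; a minimal sketch, assuming at least one of the two azure SDK flavours is importable (module names kept from the patch):

    import pkgutil

    # Probe for the optional multiapi package without importing it outright;
    # pkgutil.find_loader() returns None when the module cannot be found.
    if pkgutil.find_loader('azure.multiapi'):
        from azure.multiapi.storage.v2016_05_31 import CloudStorageAccount
    else:
        from azure.storage import CloudStorageAccount  # older single-API SDK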
From 350b0aa4ead80ac50047c08121bc09bddc05341d Mon Sep 17 00:00:00 2001 From: Bo Maryniuk <bo@suse.de> Date: Fri, 16 Nov 2018 10:54:12 +0100 Subject: [PATCH] Make aptpkg.list_repos compatible on enabled/disabled output
--- salt/modules/aptpkg.py | 1 + 1 file changed, 1 insertion(+) diff --git a/salt/modules/aptpkg.py b/salt/modules/aptpkg.py index 175ef2ed06..90b99c44b9 100644 --- a/salt/modules/aptpkg.py +++ b/salt/modules/aptpkg.py @@ -1719,6 +1719,7 @@ def list_repos(): repo['file'] = source.file repo['comps'] = getattr(source, 'comps', []) repo['disabled'] = source.disabled + repo['enabled'] = not repo['disabled'] # This is for compatibility with the other modules repo['dist'] = source.dist repo['type'] = source.type repo['uri'] = source.uri.rstrip('/') -- 2.19.1 ++++++ make-profiles-a-package.patch ++++++
From 155aa52dca9272db492990ad737256dada1c4364 Mon Sep 17 00:00:00 2001 From: Bo Maryniuk <bo@suse.de> Date: Mon, 8 Oct 2018 17:52:07 +0200 Subject: [PATCH] Make profiles a package.
Add UTF-8 encoding Add a docstring --- salt/cli/support/profiles/__init__.py | 4 ++++ 1 file changed, 4 insertions(+) create mode 100644 salt/cli/support/profiles/__init__.py diff --git a/salt/cli/support/profiles/__init__.py b/salt/cli/support/profiles/__init__.py new file mode 100644 index 0000000000..b86aef30b8 --- /dev/null +++ b/salt/cli/support/profiles/__init__.py @@ -0,0 +1,4 @@ +# coding=utf-8 +''' +Profiles for salt-support. +''' -- 2.19.0 ++++++ mount-fix-extra-t-parameter.patch ++++++
From 215d8d9c8f872b510a1c3fbb19ab4e91bc96bb64 Mon Sep 17 00:00:00 2001 From: Alberto Planas <aplanas@gmail.com> Date: Thu, 28 Feb 2019 15:45:28 +0100 Subject: [PATCH] mount: fix extra -t parameter
If 'fstype' parameter is not set in Linux environments, salt will build a mount command with an empty -t value, making the command fail. --- salt/modules/mount.py | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/salt/modules/mount.py b/salt/modules/mount.py index 4ba370e5b3..e807b1729e 100644 --- a/salt/modules/mount.py +++ b/salt/modules/mount.py @@ -1218,7 +1218,8 @@ def mount(name, device, mkmnt=False, fstype='', opts='defaults', user=None, util if fstype: args += ' -v {0}'.format(fstype) else: - args += ' -t {0}'.format(fstype) + if fstype: + args += ' -t {0}'.format(fstype) cmd = 'mount {0} {1} {2} '.format(args, device, name) out = __salt__['cmd.run_all'](cmd, runas=user, python_shell=False) if out['retcode']: -- 2.20.1 ++++++ move-server_id-deprecation-warning-to-reduce-log-spa.patch ++++++
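Reduced to its essentials, the fix above makes the flag conditional instead of emitting -t with an empty value; a sketch of the corrected command construction for the non-AIX branch (device and mount point are placeholders):

    def build_mount_cmd(fstype=''):
        args = '-o defaults'
        # Only append -t when a filesystem type was actually given;
        # 'mount -t' with an empty value makes the command fail.
        if fstype:
            args += ' -t {0}'.format(fstype)
        return 'mount {0} {1} {2}'.format(args, '/dev/sda1', '/mnt')

    print(build_mount_cmd())       # mount -o defaults /dev/sda1 /mnt
    print(build_mount_cmd('xfs'))  # mount -o defaults -t xfs /dev/sda1 /mnt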
From dab9967f8e4a67e5b7ddd4e6718414d2e9b25e42 Mon Sep 17 00:00:00 2001 From: Mihai Dinca <mdinca@suse.de> Date: Fri, 14 Jun 2019 15:13:12 +0200 Subject: [PATCH] Move server_id deprecation warning to reduce log spamming (bsc#1135567) (bsc#1135732)
--- salt/grains/core.py | 4 ---- salt/minion.py | 9 +++++++++ 2 files changed, 9 insertions(+), 4 deletions(-) diff --git a/salt/grains/core.py b/salt/grains/core.py index ce64620a24..e54212edfb 100644 --- a/salt/grains/core.py +++ b/salt/grains/core.py @@ -2812,10 +2812,6 @@ def get_server_id(): if bool(use_crc): id_hash = getattr(zlib, use_crc, zlib.adler32)(__opts__.get('id', '').encode()) & 0xffffffff else: - salt.utils.versions.warn_until('Sodium', 'This server_id is computed nor by Adler32 neither by CRC32. ' - 'Please use "server_id_use_crc" option and define algorithm you' - 'prefer (default "Adler32"). The server_id will be computed with' - 'Adler32 by default.') id_hash = _get_hash_by_shell() server_id = {'server_id': id_hash} diff --git a/salt/minion.py b/salt/minion.py index 058b7ef6b8..97f74bf47e 100644 --- a/salt/minion.py +++ b/salt/minion.py @@ -103,6 +103,7 @@ from salt.utils.odict import OrderedDict from salt.utils.process import (default_signals, SignalHandlingMultiprocessingProcess, ProcessManager) +from salt.utils.versions import warn_until from salt.exceptions import ( CommandExecutionError, CommandNotFoundError, @@ -992,6 +993,14 @@ class MinionManager(MinionBase): if (self.opts['master_type'] in ('failover', 'distributed')) or not isinstance(self.opts['master'], list): masters = [masters] + if not self.opts.get('server_id_use_crc'): + warn_until( + 'Sodium', + 'This server_id is computed nor by Adler32 neither by CRC32. ' + 'Please use "server_id_use_crc" option and define algorithm you' + 'prefer (default "Adler32"). The server_id will be computed with' + 'Adler32 by default.') + for master in masters: s_opts = copy.deepcopy(self.opts) s_opts['master'] = master -- 2.22.0 ++++++ move-tokens-in-place-with-an-atomic-operation.patch ++++++
From 1a15b40889ebd7aa5831997e12e497fad736544d Mon Sep 17 00:00:00 2001 From: "Daniel A. Wozniak" <dwozniak@saltstack.com> Date: Mon, 2 Sep 2019 00:03:27 +0000 Subject: [PATCH] Move tokens in place with an atomic operation
--- salt/tokens/localfs.py | 4 ++- tests/unit/tokens/__init__.py | 1 + tests/unit/tokens/test_localfs.py | 53 +++++++++++++++++++++++++++++++++++++++ 3 files changed, 57 insertions(+), 1 deletion(-) create mode 100644 tests/unit/tokens/__init__.py create mode 100644 tests/unit/tokens/test_localfs.py diff --git a/salt/tokens/localfs.py b/salt/tokens/localfs.py index 021bdb9e50..3660ee3186 100644 --- a/salt/tokens/localfs.py +++ b/salt/tokens/localfs.py @@ -34,6 +34,7 @@ def mk_token(opts, tdata): hash_type = getattr(hashlib, opts.get('hash_type', 'md5')) tok = six.text_type(hash_type(os.urandom(512)).hexdigest()) t_path = os.path.join(opts['token_dir'], tok) + temp_t_path = '{}.tmp'.format(t_path) while os.path.isfile(t_path): tok = six.text_type(hash_type(os.urandom(512)).hexdigest()) t_path = os.path.join(opts['token_dir'], tok) @@ -41,8 +42,9 @@ def mk_token(opts, tdata): serial = salt.payload.Serial(opts) try: with salt.utils.files.set_umask(0o177): - with salt.utils.files.fopen(t_path, 'w+b') as fp_: + with salt.utils.files.fopen(temp_t_path, 'w+b') as fp_: fp_.write(serial.dumps(tdata)) + os.rename(temp_t_path, t_path) except (IOError, OSError): log.warning( 'Authentication failure: can not write token file "%s".', t_path) diff --git a/tests/unit/tokens/__init__.py b/tests/unit/tokens/__init__.py new file mode 100644 index 0000000000..40a96afc6f --- /dev/null +++ b/tests/unit/tokens/__init__.py @@ -0,0 +1 @@ +# -*- coding: utf-8 -*- diff --git a/tests/unit/tokens/test_localfs.py b/tests/unit/tokens/test_localfs.py new file mode 100644 index 0000000000..f950091252 --- /dev/null +++ b/tests/unit/tokens/test_localfs.py @@ -0,0 +1,53 @@ +# -*- coding: utf-8 -*- +from __future__ import absolute_import, print_function, unicode_literals + +import os + +import salt.utils.files +import salt.tokens.localfs + +from tests.support.unit import TestCase, skipIf +from tests.support.helpers import with_tempdir +from tests.support.mock import NO_MOCK, NO_MOCK_REASON, patch + + +class CalledWith(object): + + def __init__(self, func, called_with=None): + self.func = func + if called_with is None: + self.called_with = [] + else: + self.called_with = called_with + + def __call__(self, *args, **kwargs): + self.called_with.append((args, kwargs)) + return self.func(*args, **kwargs) + + +@skipIf(NO_MOCK, NO_MOCK_REASON) +class WriteTokenTest(TestCase): + + @with_tempdir() + def test_write_token(self, tmpdir): + ''' + Validate tokens put in place with an atomic move + ''' + opts = { + 'token_dir': tmpdir + } + fopen = CalledWith(salt.utils.files.fopen) + rename = CalledWith(os.rename) + with patch('salt.utils.files.fopen', fopen), patch('os.rename', rename): + tdata = salt.tokens.localfs.mk_token(opts, {}) + assert 'token' in tdata + t_path = os.path.join(tmpdir, tdata['token']) + temp_t_path = '{}.tmp'.format(t_path) + assert len(fopen.called_with) == 1, len(fopen.called_with) + assert fopen.called_with == [ + ((temp_t_path, 'w+b'), {}) + ], fopen.called_with + assert len(rename.called_with) == 1, len(rename.called_with) + assert rename.called_with == [ + ((temp_t_path, t_path), {}) + ], rename.called_with -- 2.16.4 ++++++ preserve-already-defined-destructive_tests-and-expen.patch ++++++
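The write-then-rename trick above generalizes to any file that must never be observed half-written; a self-contained sketch under the assumption of a POSIX filesystem (paths are placeholders):

    import os
    import tempfile

    def atomic_write(path, data):
        # Write a sibling temp file first; os.rename() over the final name is
        # atomic within one filesystem, so readers see either the old content
        # or the new, never a truncated token.
        temp_path = '{}.tmp'.format(path)
        with open(temp_path, 'wb') as fp:
            fp.write(data)
        os.rename(temp_path, path)

    target = os.path.join(tempfile.mkdtemp(), 'token')
    atomic_write(target, b'payload')
    with open(target, 'rb') as fp:
        print(fp.read())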
From 5a1e0b7b8eab900e03fa800cc7a0a2b59bf2ff55 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Pablo=20Su=C3=A1rez=20Hern=C3=A1ndez?= <psuarezhernandez@suse.com> Date: Mon, 3 Jun 2019 11:38:36 +0100 Subject: [PATCH] Preserve already defined DESTRUCTIVE_TESTS and EXPENSIVE_TESTS env variables
--- tests/support/parser/__init__.py | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/tests/support/parser/__init__.py b/tests/support/parser/__init__.py index ed262d46c0..f269457670 100644 --- a/tests/support/parser/__init__.py +++ b/tests/support/parser/__init__.py @@ -574,12 +574,12 @@ class SaltTestingParser(optparse.OptionParser): self.validate_options() - if self.support_destructive_tests_selection: + if self.support_destructive_tests_selection and not os.environ.get('DESTRUCTIVE_TESTS', None): # Set the required environment variable in order to know if # destructive tests should be executed or not. os.environ['DESTRUCTIVE_TESTS'] = str(self.options.run_destructive) - if self.support_expensive_tests_selection: + if self.support_expensive_tests_selection and not os.environ.get('EXPENSIVE_TESTS', None): # Set the required environment variable in order to know if # expensive tests should be executed or not. os.environ['EXPENSIVE_TESTS'] = str(self.options.run_expensive) -- 2.17.1 ++++++ preserving-signature-in-module.run-state-u-50049.patch ++++++
From 318b4e0cd2efb02f26392bfe2d354a3ff5d21cbc Mon Sep 17 00:00:00 2001 From: Bo Maryniuk <bo@suse.de> Date: Mon, 15 Oct 2018 17:26:16 +0200 Subject: [PATCH] Preserving signature in "module.run" state (U#50049)
Add unit test for _call_function on signature aligning named arguments Add unit test for _call_function routine for unnamed positional arguments Remove redundant docstrings Add different test function signature with the same outcome Replace standalone function with lambda-proxy for signatures only --- salt/states/module.py | 7 +++++-- tests/unit/states/test_module.py | 27 +++++++++++++++++++++++++++ 2 files changed, 32 insertions(+), 2 deletions(-) diff --git a/salt/states/module.py b/salt/states/module.py index 2190ffa3d2..90b1d0a5f5 100644 --- a/salt/states/module.py +++ b/salt/states/module.py @@ -323,7 +323,7 @@ def _call_function(name, returner=None, **kwargs): # func_args is initialized to a list of positional arguments that the function to be run accepts func_args = argspec.args[:len(argspec.args or []) - len(argspec.defaults or [])] - arg_type, na_type, kw_type = [], {}, False + arg_type, kw_to_arg_type, na_type, kw_type = [], {}, {}, False for funcset in reversed(kwargs.get('func_args') or []): if not isinstance(funcset, dict): # We are just receiving a list of args to the function to be run, so just append @@ -334,13 +334,16 @@ def _call_function(name, returner=None, **kwargs): # We are going to pass in a keyword argument. The trick here is to make certain # that if we find that in the *args* list that we pass it there and not as a kwarg if kwarg_key in func_args: - arg_type.append(funcset[kwarg_key]) + kw_to_arg_type[kwarg_key] = funcset[kwarg_key] continue else: # Otherwise, we're good and just go ahead and pass the keyword/value pair into # the kwargs list to be run. func_kw.update(funcset) arg_type.reverse() + for arg in func_args: + if arg in kw_to_arg_type: + arg_type.append(kw_to_arg_type[arg]) _exp_prm = len(argspec.args or []) - len(argspec.defaults or []) _passed_prm = len(arg_type) missing = [] diff --git a/tests/unit/states/test_module.py b/tests/unit/states/test_module.py index bf4ddcc5b4..25082d4bb4 100644 --- a/tests/unit/states/test_module.py +++ b/tests/unit/states/test_module.py @@ -324,3 +324,30 @@ class ModuleStateTest(TestCase, LoaderModuleMockMixin): self.assertIn(comment, ret['comment']) self.assertIn('world', ret['comment']) self.assertIn('hello', ret['comment']) + + def test_call_function_named_args(self): + ''' + Test _call_function routine when params are named. Their position ordering should not matter. + + :return: + ''' + with patch.dict(module.__salt__, + {'testfunc': lambda a, b, c, *args, **kwargs: (a, b, c, args, kwargs)}, clear=True): + assert module._call_function('testfunc', func_args=[{'a': 1}, {'b': 2}, {'c': 3}]) == (1, 2, 3, (), {}) + assert module._call_function('testfunc', func_args=[{'c': 3}, {'a': 1}, {'b': 2}]) == (1, 2, 3, (), {}) + + with patch.dict(module.__salt__, + {'testfunc': lambda c, a, b, *args, **kwargs: (a, b, c, args, kwargs)}, clear=True): + assert module._call_function('testfunc', func_args=[{'a': 1}, {'b': 2}, {'c': 3}]) == (1, 2, 3, (), {}) + assert module._call_function('testfunc', func_args=[{'c': 3}, {'a': 1}, {'b': 2}]) == (1, 2, 3, (), {}) + + def test_call_function_ordered_args(self): + ''' + Test _call_function routine when params are not named. Their position should matter. 
+ + :return: + ''' + with patch.dict(module.__salt__, + {'testfunc': lambda a, b, c, *args, **kwargs: (a, b, c, args, kwargs)}, clear=True): + assert module._call_function('testfunc', func_args=[1, 2, 3]) == (1, 2, 3, (), {}) + assert module._call_function('testfunc', func_args=[3, 1, 2]) == (3, 1, 2, (), {}) -- 2.19.0 ++++++ prevent-already-reading-continuous-exception-message.patch ++++++
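The signature-alignment fix above boils down to: values supplied by name for positional parameters are collected into a map, then re-emitted in the order the callee declares them. A rough standalone rendering of that idea (function and inputs are hypothetical, not Salt's actual _call_function):

    import inspect

    def call_with_alignment(func, func_args):
        # Positional parameters are those without defaults, exactly like
        # _call_function derives them from getfullargspec.
        spec = inspect.getfullargspec(func)
        positional = spec.args[:len(spec.args or []) - len(spec.defaults or [])]
        args, by_name, kwargs = [], {}, {}
        for item in func_args:
            if isinstance(item, dict):
                key = next(iter(item))
                # A named value for a positional slot is remembered by name,
                # so caller-side ordering no longer matters.
                (by_name if key in positional else kwargs).update(item)
            else:
                args.append(item)
        ordered = [by_name[name] for name in positional if name in by_name]
        return func(*(ordered + args), **kwargs)

    f = lambda a, b, c: (a, b, c)
    print(call_with_alignment(f, [{'c': 3}, {'a': 1}, {'b': 2}]))  # -> (1, 2, 3)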
From 6c84612b52b5f14e74a1c44f03d78a85c6f0c5dc Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Pablo=20Su=C3=A1rez=20Hern=C3=A1ndez?= <psuarezhernandez@suse.com> Date: Tue, 29 Oct 2019 09:08:52 +0000 Subject: [PATCH] Prevent 'Already reading' continuous exception message (bsc#1137642)
--- salt/transport/ipc.py | 1 + 1 file changed, 1 insertion(+) diff --git a/salt/transport/ipc.py b/salt/transport/ipc.py index 8235f104ef..0ed0baeec2 100644 --- a/salt/transport/ipc.py +++ b/salt/transport/ipc.py @@ -770,6 +770,7 @@ class IPCMessageSubscriber(IPCClient): break except Exception as exc: log.error('Exception occurred while Subscriber handling stream: %s', exc) + yield tornado.gen.sleep(1) def __run_callbacks(self, raw): for callback in self.callbacks: -- 2.23.0 ++++++ prevent-ansiblegate-unit-tests-to-fail-on-ubuntu.patch ++++++
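The single added line above turns a hot exception loop into a paced retry; the shape of the fix, reduced to a runnable sketch (the failing read is a stand-in, and the patch sleeps a full second):

    import tornado.gen
    import tornado.ioloop

    @tornado.gen.coroutine
    def read_stream():
        for _ in range(3):  # bounded here so the demo terminates
            try:
                raise IOError('Already reading')  # stand-in for the stream read
            except Exception as exc:
                print('Exception occurred while handling stream: %s' % exc)
                # Without this sleep the except branch re-enters immediately,
                # repeating the same log line at CPU speed.
                yield tornado.gen.sleep(0.2)
        tornado.ioloop.IOLoop.current().stop()

    io_loop = tornado.ioloop.IOLoop.current()
    io_loop.spawn_callback(read_stream)
    io_loop.start()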
From 84e9371399b50618765038bcec2e313a006eadf9 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Pablo=20Su=C3=A1rez=20Hern=C3=A1ndez?= <psuarezhernandez@suse.com> Date: Mon, 8 Jul 2019 14:46:10 +0100 Subject: [PATCH] Prevent ansiblegate unit tests to fail on Ubuntu
--- tests/unit/modules/test_ansiblegate.py | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/tests/unit/modules/test_ansiblegate.py b/tests/unit/modules/test_ansiblegate.py index 70b47f8bc2..2a24d6f147 100644 --- a/tests/unit/modules/test_ansiblegate.py +++ b/tests/unit/modules/test_ansiblegate.py @@ -172,9 +172,11 @@ description: with patch('salt.utils.timed_subprocess.TimedProc', proc): ret = _ansible_module_caller.call("one.two.three", "arg_1", kwarg1="foobar") if six.PY3: - proc.assert_any_call(['echo', '{"ANSIBLE_MODULE_ARGS": {"kwarg1": "foobar", "_raw_params": "arg_1"}}'], stdout=-1, timeout=1200) proc.assert_any_call(['python3', 'foofile'], stdin=ANSIBLE_MODULE_ARGS, stdout=-1, timeout=1200) else: - proc.assert_any_call(['echo', '{"ANSIBLE_MODULE_ARGS": {"_raw_params": "arg_1", "kwarg1": "foobar"}}'], stdout=-1, timeout=1200) proc.assert_any_call(['python', 'foofile'], stdin=ANSIBLE_MODULE_ARGS, stdout=-1, timeout=1200) + try: + proc.assert_any_call(['echo', '{"ANSIBLE_MODULE_ARGS": {"kwarg1": "foobar", "_raw_params": "arg_1"}}'], stdout=-1, timeout=1200) + except AssertionError: + proc.assert_any_call(['echo', '{"ANSIBLE_MODULE_ARGS": {"_raw_params": "arg_1", "kwarg1": "foobar"}}'], stdout=-1, timeout=1200) assert ret == {"completed": True, "timeout": 1200} -- 2.21.0 ++++++ prevent-systemd-run-description-issue-when-running-a.patch ++++++
From 44a91c2ce6df78d93ce0ef659dedb0e41b1c2e04 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Pablo=20Su=C3=A1rez=20Hern=C3=A1ndez?= <psuarezhernandez@suse.com> Date: Mon, 30 Sep 2019 12:06:08 +0100 Subject: [PATCH] Prevent systemd-run description issue when running aptpkg (bsc#1152366)
--- salt/modules/aptpkg.py | 2 +- tests/unit/modules/test_aptpkg.py | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/salt/modules/aptpkg.py b/salt/modules/aptpkg.py index d49a48310e..a11bb51c16 100644 --- a/salt/modules/aptpkg.py +++ b/salt/modules/aptpkg.py @@ -165,7 +165,7 @@ def _call_apt(args, scope=True, **kwargs): ''' cmd = [] if scope and salt.utils.systemd.has_scope(__context__) and __salt__['config.get']('systemd.scope', True): - cmd.extend(['systemd-run', '--scope', '--description "{0}"'.format(__name__)]) + cmd.extend(['systemd-run', '--scope', '--description', '"{0}"'.format(__name__)]) cmd.extend(args) params = {'output_loglevel': 'trace', diff --git a/tests/unit/modules/test_aptpkg.py b/tests/unit/modules/test_aptpkg.py index 06f3a9f6aa..85360da181 100644 --- a/tests/unit/modules/test_aptpkg.py +++ b/tests/unit/modules/test_aptpkg.py @@ -544,7 +544,7 @@ class AptUtilsTestCase(TestCase, LoaderModuleMockMixin): with patch.dict(aptpkg.__salt__, {'cmd.run_all': MagicMock(), 'config.get': MagicMock(return_value=True)}): aptpkg._call_apt(['apt-get', 'purge', 'vim']) # pylint: disable=W0106 aptpkg.__salt__['cmd.run_all'].assert_called_once_with( - ['systemd-run', '--scope', '--description "salt.modules.aptpkg"', 'apt-get', 'purge', 'vim'], env={}, + ['systemd-run', '--scope', '--description', '"salt.modules.aptpkg"', 'apt-get', 'purge', 'vim'], env={}, output_loglevel='trace', python_shell=False) def test_call_apt_with_kwargs(self): -- 2.22.0 ++++++ prevent-test_mod_del_repo_multiline_values-to-fail.patch ++++++
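The fix above is about argv boundaries: passed as one list element, --description and its value reach systemd-run fused into a single malformed token; as two elements they arrive separately. A small demonstration of the difference, with a Python child process standing in for systemd-run:

    import subprocess
    import sys

    show_argv = [sys.executable, '-c', 'import sys; print(sys.argv[1:])']
    name = 'salt.modules.aptpkg'

    # Pre-patch shape: option and value fused into one argv element.
    print(subprocess.check_output(show_argv + ['--description "{0}"'.format(name)]))
    # Post-patch shape: option and value travel as two argv elements.
    print(subprocess.check_output(show_argv + ['--description', '"{0}"'.format(name)]))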
From 5aa9768a438b6ce8f504dd2f98e6eaa090287eda Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Pablo=20Su=C3=A1rez=20Hern=C3=A1ndez?= <psuarezhernandez@suse.com> Date: Wed, 27 Nov 2019 15:41:57 +0000 Subject: [PATCH] Prevent test_mod_del_repo_multiline_values to fail
--- tests/integration/modules/test_pkg.py | 20 ++++++++++++++------ 1 file changed, 14 insertions(+), 6 deletions(-) diff --git a/tests/integration/modules/test_pkg.py b/tests/integration/modules/test_pkg.py index a82c9662c7..e206774c1a 100644 --- a/tests/integration/modules/test_pkg.py +++ b/tests/integration/modules/test_pkg.py @@ -132,6 +132,7 @@ class PkgModuleTest(ModuleCase, SaltReturnAssertsMixin): try: if os_grain in ['CentOS', 'RedHat', 'SUSE']: my_baseurl = 'http://my.fake.repo/foo/bar/\n http://my.fake.repo.alt/foo/bar/' + expected_get_repo_baseurl_zypp = 'http://my.fake.repo/foo/bar/%0A%20http://my.fake.repo.alt/foo/bar/' expected_get_repo_baseurl = 'http://my.fake.repo/foo/bar/\nhttp://my.fake.repo.alt/foo/bar/' major_release = int( self.run_function( @@ -156,17 +157,24 @@ class PkgModuleTest(ModuleCase, SaltReturnAssertsMixin): enabled=enabled, failovermethod=failovermethod, ) - # return data from pkg.mod_repo contains the file modified at - # the top level, so use next(iter(ret)) to get that key self.assertNotEqual(ret, {}) - repo_info = ret[next(iter(ret))] + repo_info = {repo: ret} self.assertIn(repo, repo_info) - self.assertEqual(repo_info[repo]['baseurl'], my_baseurl) + if os_grain == 'SUSE': + self.assertEqual(repo_info[repo]['baseurl'], expected_get_repo_baseurl_zypp) + else: + self.assertEqual(repo_info[repo]['baseurl'], my_baseurl) ret = self.run_function('pkg.get_repo', [repo]) - self.assertEqual(ret['baseurl'], expected_get_repo_baseurl) + if os_grain == 'SUSE': + self.assertEqual(repo_info[repo]['baseurl'], expected_get_repo_baseurl_zypp) + else: + self.assertEqual(ret['baseurl'], expected_get_repo_baseurl) self.run_function('pkg.mod_repo', [repo]) ret = self.run_function('pkg.get_repo', [repo]) - self.assertEqual(ret['baseurl'], expected_get_repo_baseurl) + if os_grain == 'SUSE': + self.assertEqual(repo_info[repo]['baseurl'], expected_get_repo_baseurl_zypp) + else: + self.assertEqual(ret['baseurl'], expected_get_repo_baseurl) finally: if repo is not None: self.run_function('pkg.del_repo', [repo]) -- 2.23.0 ++++++ provide-the-missing-features-required-for-yomi-yet-o.patch ++++++ ++++ 13325 lines (skipped) ++++++ read-repo-info-without-using-interpolation-bsc-11356.patch ++++++
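The zypper-specific expectation above exists because libzypp percent-encodes the whitespace in multi-line baseurl values; the transformation can be reproduced with urllib (a demonstration of the encoding only, not of how zypper itself stores repos):

    from urllib.parse import quote

    my_baseurl = 'http://my.fake.repo/foo/bar/\n http://my.fake.repo.alt/foo/bar/'
    # The newline and space become %0A and %20; URL syntax characters survive.
    print(quote(my_baseurl, safe=':/'))
    # -> http://my.fake.repo/foo/bar/%0A%20http://my.fake.repo.alt/foo/bar/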
From 2aaa8ac1e4c0bfcff24044c1ed811496935074ba Mon Sep 17 00:00:00 2001 From: Mihai Dinca <mdinca@suse.de> Date: Thu, 7 Nov 2019 15:11:49 +0100 Subject: [PATCH] Read repo info without using interpolation (bsc#1135656)
--- salt/modules/zypperpkg.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/salt/modules/zypperpkg.py b/salt/modules/zypperpkg.py index a87041aa70..8c1e05c21c 100644 --- a/salt/modules/zypperpkg.py +++ b/salt/modules/zypperpkg.py @@ -1043,7 +1043,7 @@ def _get_repo_info(alias, repos_cfg=None, root=None): Get one repo meta-data. ''' try: - meta = dict((repos_cfg or _get_configured_repos(root=root)).items(alias)) + meta = dict((repos_cfg or _get_configured_repos(root=root)).items(alias, raw=True)) meta['alias'] = alias for key, val in six.iteritems(meta): if val in ['0', '1']: -- 2.23.0 ++++++ remove-arch-from-name-when-pkg.list_pkgs-is-called-w.patch ++++++ ++++ 744 lines (skipped) ++++++ remove-unnecessary-yield-causing-badyielderror-bsc-1.patch ++++++
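The raw=True flag above matters because repo files can contain literal '%' sequences that ConfigParser would otherwise try to interpolate; a self-contained demonstration with Python 3's configparser (the repo value is hypothetical):

    import configparser

    cp = configparser.ConfigParser()
    cp.read_string('[repo]\nbaseurl = http://example.com/%24arch/\n')

    # An interpolated read chokes on the literal '%' sequence...
    try:
        dict(cp.items('repo'))
    except configparser.InterpolationError as err:
        print('interpolation failed:', err)

    # ...while raw=True returns the stored value untouched, as the patch does.
    print(dict(cp.items('repo', raw=True)))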
From 53d182abfbf7ab1156496481801e5e64e7f112e6 Mon Sep 17 00:00:00 2001 From: Mihai Dinca <mdinca@suse.de> Date: Wed, 30 Oct 2019 10:19:12 +0100 Subject: [PATCH] Remove unnecessary yield causing BadYieldError (bsc#1154620)
--- salt/cli/batch_async.py | 2 -- 1 file changed, 2 deletions(-) diff --git a/salt/cli/batch_async.py b/salt/cli/batch_async.py index 6d0dca1da5..754c257b36 100644 --- a/salt/cli/batch_async.py +++ b/salt/cli/batch_async.py @@ -227,7 +227,6 @@ class BatchAsync(object): self.event.unsubscribe(pattern, match_type='glob') del self gc.collect() - yield @tornado.gen.coroutine def schedule_next(self): @@ -263,7 +262,6 @@ class BatchAsync(object): else: yield self.end_batch() gc.collect() - yield def __del__(self): self.local = None -- 2.23.0 ++++++ remove-virt.pool_delete-fast-parameter-178.patch ++++++
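The removed statements above were bare yields at the end of coroutines, which hand None to Tornado's coroutine runner; a two-function reproduction of the failure mode, assuming only Tornado:

    import tornado.gen
    import tornado.ioloop

    @tornado.gen.coroutine
    def bad():
        yield  # a bare yield passes None to the runner

    @tornado.gen.coroutine
    def main():
        try:
            yield bad()
        except tornado.gen.BadYieldError as err:
            print('BadYieldError:', err)
        tornado.ioloop.IOLoop.current().stop()

    io_loop = tornado.ioloop.IOLoop.current()
    io_loop.spawn_callback(main)
    io_loop.start()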
From 6dfe6e1370f330c0d300bf0effd7e6cf8a28c734 Mon Sep 17 00:00:00 2001 From: Cedric Bosdonnat <cbosdonnat@suse.com> Date: Wed, 30 Oct 2019 12:18:51 +0100 Subject: [PATCH] Remove virt.pool_delete fast parameter (#178)
There are two reasons to remove this parameter without deprecating it: * the meaning has been mistakenly inversed * fast=True will raise an exception for every libvirt storage backend since that flag has never been implemented in any of those. Fixes issue #54474 --- salt/modules/virt.py | 9 ++------- tests/unit/modules/test_virt.py | 17 +++++++++++++++++ 2 files changed, 19 insertions(+), 7 deletions(-) diff --git a/salt/modules/virt.py b/salt/modules/virt.py index d01b6c3f1e..3abc140a00 100644 --- a/salt/modules/virt.py +++ b/salt/modules/virt.py @@ -4885,13 +4885,11 @@ def pool_undefine(name, **kwargs): conn.close() -def pool_delete(name, fast=True, **kwargs): +def pool_delete(name, **kwargs): ''' Delete the resources of a defined libvirt storage pool. :param name: libvirt storage pool name - :param fast: if set to False, zeroes out all the data. - Default value is True. :param connection: libvirt connection URI, overriding defaults :param username: username to connect with, overriding defaults :param password: password to connect with, overriding defaults @@ -4907,10 +4905,7 @@ def pool_delete(name, fast=True, **kwargs): conn = __get_conn(**kwargs) try: pool = conn.storagePoolLookupByName(name) - flags = libvirt.VIR_STORAGE_POOL_DELETE_NORMAL - if fast: - flags = libvirt.VIR_STORAGE_POOL_DELETE_ZEROED - return not bool(pool.delete(flags)) + return not bool(pool.delete(libvirt.VIR_STORAGE_POOL_DELETE_NORMAL)) finally: conn.close() diff --git a/tests/unit/modules/test_virt.py b/tests/unit/modules/test_virt.py index 4d20e998d8..b95f51807f 100644 --- a/tests/unit/modules/test_virt.py +++ b/tests/unit/modules/test_virt.py @@ -3006,3 +3006,20 @@ class VirtTestCase(TestCase, LoaderModuleMockMixin): self.assertEqual('vnc', graphics['type']) self.assertEqual('5900', graphics['port']) self.assertEqual('0.0.0.0', graphics['listen']) + + def test_pool_delete(self): + ''' + Test virt.pool_delete function + ''' + mock_pool = MagicMock() + mock_pool.delete = MagicMock(return_value=0) + self.mock_conn.storagePoolLookupByName = MagicMock(return_value=mock_pool) + + res = virt.pool_delete('test-pool') + self.assertTrue(res) + + self.mock_conn.storagePoolLookupByName.assert_called_once_with('test-pool') + + # Shouldn't be called with another parameter so far since those are not implemented + # and thus throwing exceptions. + mock_pool.delete.assert_called_once_with(self.mock_libvirt.VIR_STORAGE_POOL_DELETE_NORMAL) -- 2.23.0 ++++++ restore-default-behaviour-of-pkg-list-return.patch ++++++
From 56fd68474f399a36b0a74ca9a01890649d997792 Mon Sep 17 00:00:00 2001 From: Jochen Breuer <jbreuer@suse.de> Date: Fri, 30 Aug 2019 14:20:06 +0200 Subject: [PATCH] Restore default behaviour of pkg list return
The default behaviour for pkg list return was to not include patches, even when installing patches. Only the packages were returned. There is now a parameter to also return patches if that is needed. Co-authored-by: Mihai Dinca <mdinca@suse.de> --- salt/modules/zypperpkg.py | 32 +++++++++++++++++++++++--------- 1 file changed, 23 insertions(+), 9 deletions(-) diff --git a/salt/modules/zypperpkg.py b/salt/modules/zypperpkg.py index f71d6aac9e..da1953b2a5 100644 --- a/salt/modules/zypperpkg.py +++ b/salt/modules/zypperpkg.py @@ -1302,8 +1302,10 @@ def refresh_db(root=None): return ret -def _find_types(pkgs): +def _detect_includes(pkgs, inclusion_detection): '''Form a package names list, find prefixes of packages types.''' + if not inclusion_detection: + return None return sorted({pkg.split(':', 1)[0] for pkg in pkgs if len(pkg.split(':', 1)) == 2}) @@ -1319,6 +1321,7 @@ def install(name=None, ignore_repo_failure=False, no_recommends=False, root=None, + inclusion_detection=False, **kwargs): ''' .. versionchanged:: 2015.8.12,2016.3.3,2016.11.0 @@ -1433,6 +1436,9 @@ def install(name=None, .. versionadded:: 2018.3.0 + inclusion_detection: + Detect ``includes`` based on ``sources``. + By default packages are always included. Returns a dict containing the new package names and versions:: @@ -1498,7 +1504,8 @@ def install(name=None, diff_attr = kwargs.get("diff_attr") - includes = _find_types(targets) + includes = _detect_includes(targets, inclusion_detection) + old = list_pkgs(attr=diff_attr, root=root, includes=includes) if not downloadonly else list_downloaded(root) downgrades = [] @@ -1688,7 +1695,7 @@ def upgrade(refresh=True, return ret -def _uninstall(name=None, pkgs=None, root=None): +def _uninstall(inclusion_detection, name=None, pkgs=None, root=None): ''' Remove and purge do identical things but with different Zypper commands, this function performs the common logic. @@ -1698,7 +1705,7 @@ def _uninstall(name=None, pkgs=None, root=None): except MinionError as exc: raise CommandExecutionError(exc) - includes = _find_types(pkg_params.keys()) + includes = _detect_includes(pkg_params.keys(), inclusion_detection) old = list_pkgs(root=root, includes=includes) targets = [] for target in pkg_params: @@ -1757,7 +1764,7 @@ def normalize_name(name): return name -def remove(name=None, pkgs=None, root=None, **kwargs): # pylint: disable=unused-argument +def remove(name=None, pkgs=None, root=None, inclusion_detection=False, **kwargs): # pylint: disable=unused-argument ''' .. versionchanged:: 2015.8.12,2016.3.3,2016.11.0 On minions running systemd>=205, `systemd-run(1)`_ is now used to @@ -1788,8 +1795,11 @@ def remove(name=None, pkgs=None, root=None, **kwargs): # pylint: disable=unused root Operate on a different root directory. - .. versionadded:: 0.16.0 + inclusion_detection: + Detect ``includes`` based on ``pkgs``. + By default packages are always included. + .. versionadded:: 0.16.0 Returns a dict containing the changes. @@ -1801,10 +1811,10 @@ def remove(name=None, pkgs=None, root=None, **kwargs): # pylint: disable=unused salt '*' pkg.remove <package1>,<package2>,<package3> salt '*' pkg.remove pkgs='["foo", "bar"]' ''' - return _uninstall(name=name, pkgs=pkgs, root=root) + return _uninstall(inclusion_detection, name=name, pkgs=pkgs, root=root) -def purge(name=None, pkgs=None, root=None, **kwargs): # pylint: disable=unused-argument +def purge(name=None, pkgs=None, root=None, inclusion_detection=False, **kwargs): # pylint: disable=unused-argument ''' ..
versionchanged:: 2015.8.12,2016.3.3,2016.11.0 On minions running systemd>=205, `systemd-run(1)`_ is now used to @@ -1836,6 +1846,10 @@ def purge(name=None, pkgs=None, root=None, **kwargs): # pylint: disable=unused- root Operate on a different root directory. + inclusion_detection: + Detect ``includes`` based on ``pkgs`` + By default packages are always included + .. versionadded:: 0.16.0 @@ -1849,7 +1863,7 @@ def purge(name=None, pkgs=None, root=None, **kwargs): # pylint: disable=unused- salt '*' pkg.purge <package1>,<package2>,<package3> salt '*' pkg.purge pkgs='["foo", "bar"]' ''' - return _uninstall(name=name, pkgs=pkgs, root=root) + return _uninstall(inclusion_detection, name=name, pkgs=pkgs, root=root) def list_locks(root=None): -- 2.20.1 ++++++ restrict-the-start_event_grains-only-to-the-start-ev.patch ++++++
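A usage sketch for the restored default and the new opt-in flag above, using ``salt.client.Caller`` on a zypper-based minion (package name illustrative):

.. code-block:: python

    import salt.client

    caller = salt.client.Caller()  # the salt-call equivalent on the minion

    # Default behaviour again: the result only lists packages, with no
    # patch:* includes, even when patches get installed along the way.
    caller.cmd('pkg.install', 'vim')

    # Opt back in to include detection explicitly:
    caller.cmd('pkg.remove', 'vim', inclusion_detection=True)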
From e819518ec3ef48bc93f7514710ebfebe59dfd228 Mon Sep 17 00:00:00 2001 From: Abid Mehmood <amehmood@suse.de> Date: Thu, 16 Jan 2020 11:28:04 +0100 Subject: [PATCH] Restrict the 'start_event_grains' only to the start events
add test for custom events --- salt/minion.py | 11 ++++++++--- tests/unit/test_minion.py | 18 +++++++++++++++++- 2 files changed, 25 insertions(+), 4 deletions(-) diff --git a/salt/minion.py b/salt/minion.py index 4c7ea0491c..8f60195cb3 100644 --- a/salt/minion.py +++ b/salt/minion.py @@ -1424,7 +1424,7 @@ class Minion(MinionBase): finally: channel.close() - def _fire_master(self, data=None, tag=None, events=None, pretag=None, timeout=60, sync=True, timeout_handler=None): + def _fire_master(self, data=None, tag=None, events=None, pretag=None, timeout=60, sync=True, timeout_handler=None, include_startup_grains=False): ''' Fire an event on the master, or drop message if unable to send. ''' @@ -1443,7 +1443,7 @@ class Minion(MinionBase): else: return - if self.opts['start_event_grains']: + if include_startup_grains: grains_to_add = dict( [(k, v) for k, v in six.iteritems(self.opts.get('grains', {})) if k in self.opts['start_event_grains']]) load['grains'] = grains_to_add @@ -2157,6 +2157,9 @@ class Minion(MinionBase): }) def _fire_master_minion_start(self): + include_grains = False + if self.opts['start_event_grains']: + include_grains = True # Send an event to the master that the minion is live if self.opts['enable_legacy_startup_events']: # Old style event. Defaults to False in Sodium release. @@ -2165,7 +2168,8 @@ class Minion(MinionBase): self.opts['id'], time.asctime() ), - 'minion_start' + 'minion_start', + include_startup_grains=include_grains ) # send name spaced event self._fire_master( @@ -2174,6 +2178,7 @@ class Minion(MinionBase): time.asctime() ), tagify([self.opts['id'], 'start'], 'minion'), + include_startup_grains=include_grains ) def module_refresh(self, force_refresh=False, notify=False): diff --git a/tests/unit/test_minion.py b/tests/unit/test_minion.py index 7913b9cd01..4c57ab8f3f 100644 --- a/tests/unit/test_minion.py +++ b/tests/unit/test_minion.py @@ -314,7 +314,7 @@ class MinionTestCase(TestCase, AdaptedConfigurationTestCaseMixin): try: minion.tok = MagicMock() minion._send_req_sync = MagicMock() - minion._fire_master('Minion has started', 'minion_start') + minion._fire_master('Minion has started', 'minion_start', include_startup_grains=True) load = minion._send_req_sync.call_args[0][0] self.assertTrue('grains' in load) @@ -337,6 +337,22 @@ class MinionTestCase(TestCase, AdaptedConfigurationTestCaseMixin): finally: minion.destroy() + def test_when_other_events_fired_and_start_event_grains_are_set(self): + mock_opts = self.get_config('minion', from_scratch=True) + mock_opts['start_event_grains'] = ["os"] + io_loop = tornado.ioloop.IOLoop() + io_loop.make_current() + minion = salt.minion.Minion(mock_opts, io_loop=io_loop) + try: + minion.tok = MagicMock() + minion._send_req_sync = MagicMock() + minion._fire_master('Custm_event_fired', 'custom_event') + load = minion._send_req_sync.call_args[0][0] + + self.assertTrue('grains' not in load) + finally: + minion.destroy() + def test_minion_retry_dns_count(self): ''' Tests that the resolve_dns will retry dns look ups for a maximum of -- 2.16.4 ++++++ return-the-expected-powerpc-os-arch-bsc-1117995.patch ++++++
From 2cbc403b422a699cd948ed6218fce28fa901f5fa Mon Sep 17 00:00:00 2001 From: Mihai Dinca <mdinca@suse.de> Date: Thu, 13 Dec 2018 12:17:35 +0100 Subject: [PATCH] Return the expected powerpc os arch (bsc#1117995)
--- salt/utils/pkg/rpm.py | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-) diff --git a/salt/utils/pkg/rpm.py b/salt/utils/pkg/rpm.py index bb8c3fb589..828b0cecda 100644 --- a/salt/utils/pkg/rpm.py +++ b/salt/utils/pkg/rpm.py @@ -53,8 +53,11 @@ def get_osarch(): stderr=subprocess.PIPE).communicate()[0] else: ret = ''.join(list(filter(None, platform.uname()[-2:]))[-1:]) - - return salt.utils.stringutils.to_str(ret).strip() or 'unknown' + ret = salt.utils.stringutils.to_str(ret).strip() or 'unknown' + ARCH_FIXES_MAPPING = { + "powerpc64le": "ppc64le" + } + return ARCH_FIXES_MAPPING.get(ret, ret) def check_32(arch, osarch=None): -- 2.20.1 ++++++ run-salt-api-as-user-salt-bsc-1064520.patch ++++++
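The mapping above is small enough to sanity-check in isolation; a sketch of the normalization the patch adds to ``get_osarch()``:

.. code-block:: python

    # mirrors the lookup added to salt/utils/pkg/rpm.py
    ARCH_FIXES_MAPPING = {
        'powerpc64le': 'ppc64le',  # platform.uname() name vs. RPM arch name
    }

    def normalize_osarch(raw):
        osarch = (raw or '').strip() or 'unknown'
        return ARCH_FIXES_MAPPING.get(osarch, osarch)

    assert normalize_osarch('powerpc64le') == 'ppc64le'
    assert normalize_osarch('x86_64') == 'x86_64'  # everything else untouched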
From e9b5c0ae02552eb9a76488da32217a0e339d86a2 Mon Sep 17 00:00:00 2001 From: Christian Lanig <clanig@suse.com> Date: Mon, 27 Nov 2017 13:10:26 +0100 Subject: [PATCH] Run salt-api as user salt (bsc#1064520)
--- pkg/salt-api.service | 1 + 1 file changed, 1 insertion(+) diff --git a/pkg/salt-api.service b/pkg/salt-api.service index 7ca582dfb4..bf513e4dbd 100644 --- a/pkg/salt-api.service +++ b/pkg/salt-api.service @@ -6,6 +6,7 @@ After=network.target [Service] Type=notify NotifyAccess=all +User=salt LimitNOFILE=8192 ExecStart=/usr/bin/salt-api TimeoutStopSec=3 -- 2.13.7 ++++++ run-salt-master-as-dedicated-salt-user.patch ++++++
From 3d4be53c265dffdbfaf1d7d4764c361a640fd5ff Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Klaus=20K=C3=A4mpf?= <kkaempf@suse.de> Date: Wed, 20 Jan 2016 11:01:06 +0100 Subject: [PATCH] Run salt master as dedicated salt user
* The minion always runs as root --- conf/master | 3 ++- pkg/salt-common.logrotate | 2 ++ 2 files changed, 4 insertions(+), 1 deletion(-) diff --git a/conf/master b/conf/master index 149fe8812f..d492aef6df 100644 --- a/conf/master +++ b/conf/master @@ -25,7 +25,8 @@ # permissions to allow the specified user to run the master. The exception is # the job cache, which must be deleted if this user is changed. If the # modified files cause conflicts, set verify_env to False. -#user: root +user: salt +syndic_user: salt # The port used by the communication interface. The ret (return) port is the # interface used for the file server, authentication, job returns, etc. diff --git a/pkg/salt-common.logrotate b/pkg/salt-common.logrotate index 3cd002308e..0d99d1b801 100644 --- a/pkg/salt-common.logrotate +++ b/pkg/salt-common.logrotate @@ -1,4 +1,5 @@ /var/log/salt/master { + su salt salt weekly missingok rotate 7 @@ -15,6 +16,7 @@ } /var/log/salt/key { + su salt salt weekly missingok rotate 7 -- 2.13.7 ++++++ salt-tmpfiles.d ++++++ # Type Path Mode UID GID Age Argument d /var/run/salt 0750 root salt d /var/run/salt/master 0750 salt salt d /var/run/salt/minion 0750 root root ++++++ strip-trailing-from-repo.uri-when-comparing-repos-in.patch ++++++
From 1b6f3f2e8b88ddfaebd5bfd1ae8258d417a9f098 Mon Sep 17 00:00:00 2001 From: Matei Albu <malbu@suse.de> Date: Fri, 15 Feb 2019 14:34:13 +0100 Subject: [PATCH] Strip trailing "/" from repo.uri when comparing repos in aptpkg.mod_repo (bsc#1146192)
(cherry picked from commit af85627) --- salt/modules/aptpkg.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/salt/modules/aptpkg.py b/salt/modules/aptpkg.py index b7c1a342ef..d49a48310e 100644 --- a/salt/modules/aptpkg.py +++ b/salt/modules/aptpkg.py @@ -2354,7 +2354,7 @@ def mod_repo(repo, saltenv='base', **kwargs): # and the resulting source line. The idea here is to ensure # we are not returning bogus data because the source line # has already been modified on a previous run. - repo_matches = source.type == repo_type and source.uri == repo_uri and source.dist == repo_dist + repo_matches = source.type == repo_type and source.uri.rstrip('/') == repo_uri.rstrip('/') and source.dist == repo_dist kw_matches = source.dist == kw_dist and source.type == kw_type if repo_matches or kw_matches: -- 2.20.1 ++++++ support-config-non-root-permission-issues-fixes-u-50.patch ++++++
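The comparison fix is easy to see in isolation (URIs illustrative): without the ``rstrip``, mod_repo treated the same repository with and without a trailing slash as two different sources and kept re-adding it.

.. code-block:: python

    existing = 'http://example.com/ubuntu/'   # as written in sources.list
    requested = 'http://example.com/ubuntu'   # as passed to pkg.mod_repo
    assert existing != requested
    assert existing.rstrip('/') == requested.rstrip('/')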
From 1113909fe9ab0509ebe439051238d6a4f95d3c54 Mon Sep 17 00:00:00 2001 From: Bo Maryniuk <bo@suse.de> Date: Wed, 17 Oct 2018 14:10:47 +0200 Subject: [PATCH] Support-config non-root permission issues fixes (U#50095)
Do not crash if there is no configuration available at all. Handle CLI and log errors. Catch errors when overwriting an existing archive owned by other users. Suppress excessive tracebacks at the error log level. --- salt/cli/support/collector.py | 39 ++++++++++++++++++++++++++++++++--- salt/utils/parsers.py | 2 +- 2 files changed, 37 insertions(+), 4 deletions(-) diff --git a/salt/cli/support/collector.py b/salt/cli/support/collector.py index 478d07e13b..a4343297b6 100644 --- a/salt/cli/support/collector.py +++ b/salt/cli/support/collector.py @@ -125,6 +125,31 @@ class SupportDataCollector(object): self.__current_section = [] self.__current_section_name = name + def _printout(self, data, output): + ''' + Use a Salt outputter to print out content. + + :return: + ''' + opts = {'extension_modules': '', 'color': False} + try: + printout = salt.output.get_printout(output, opts)(data) + if printout is not None: + return printout.rstrip() + except (KeyError, AttributeError, TypeError) as err: + log.debug(err, exc_info=True) + try: + printout = salt.output.get_printout('nested', opts)(data) + if printout is not None: + return printout.rstrip() + except (KeyError, AttributeError, TypeError) as err: + log.debug(err, exc_info=True) + printout = salt.output.get_printout('raw', opts)(data) + if printout is not None: + return printout.rstrip() + + return salt.output.try_printout(data, output, opts) + def write(self, title, data, output=None): ''' Add data to the currently opened section. @@ -138,7 +163,7 @@ class SupportDataCollector(object): try: if isinstance(data, dict) and 'return' in data: data = data['return'] - content = salt.output.try_printout(data, output, {'extension_modules': '', 'color': False}) + content = self._printout(data, output) except Exception: # Fall-back to just raw YAML content = None else: @@ -406,7 +431,11 @@ class SaltSupport(salt.utils.parsers.SaltSupportOptionParser): and self.config.get('support_archive') and os.path.exists(self.config['support_archive'])): self.out.warning('Terminated earlier, cleaning up') - os.unlink(self.config['support_archive']) + try: + os.unlink(self.config['support_archive']) + except Exception as err: + log.debug(err) + self.out.error('{} while cleaning up.'.format(err)) def _check_existing_archive(self): ''' @@ -418,7 +447,11 @@ class SaltSupport(salt.utils.parsers.SaltSupportOptionParser): if os.path.exists(self.config['support_archive']): if self.config['support_archive_force_overwrite']: self.out.warning('Overwriting existing archive: {}'.format(self.config['support_archive'])) - os.unlink(self.config['support_archive']) + try: + os.unlink(self.config['support_archive']) + except Exception as err: + log.debug(err) + self.out.error('{} while trying to overwrite existing archive.'.format(err)) ret = True else: self.out.warning('File {} already exists.'.format(self.config['support_archive'])) diff --git a/salt/utils/parsers.py b/salt/utils/parsers.py index 56a8961c3a..058346a9f4 100644 --- a/salt/utils/parsers.py +++ b/salt/utils/parsers.py @@ -1922,7 +1922,7 @@ class SaltSupportOptionParser(six.with_metaclass(OptionParserMeta, OptionParser, ''' _opts, _args = optparse.OptionParser.parse_args(self) configs = self.find_existing_configs(_opts.support_unit) - if cfg not in configs: + if configs and cfg not in configs: cfg = configs[0] return config.master_config(self.get_config_file_path(cfg)) -- 2.19.0 ++++++ support-for-btrfs-and-xfs-in-parted-and-mkfs.patch ++++++
From 80d7e7670157f9ed71773b13d9fde0841fbe6a78 Mon Sep 17 00:00:00 2001 From: Jochen Breuer <jbreuer@suse.de> Date: Fri, 10 Jan 2020 17:18:14 +0100 Subject: [PATCH] Support for Btrfs and XFS in parted and mkfs
--- salt/modules/parted_partition.py | 4 ++-- tests/unit/modules/test_parted_partition.py | 16 ++++++++++++++++ 2 files changed, 18 insertions(+), 2 deletions(-) diff --git a/salt/modules/parted_partition.py b/salt/modules/parted_partition.py index c2e0ebb882..e68124c245 100644 --- a/salt/modules/parted_partition.py +++ b/salt/modules/parted_partition.py @@ -390,8 +390,8 @@ def _is_fstype(fs_type): :param fs_type: file system type :return: True if fs_type is supported in this module, False otherwise ''' - return fs_type in set(['ext2', 'ext3', 'ext4', 'fat32', 'fat16', 'linux-swap', 'reiserfs', - 'hfs', 'hfs+', 'hfsx', 'NTFS', 'ntfs', 'ufs']) + return fs_type in set(['btrfs', 'ext2', 'ext3', 'ext4', 'fat32', 'fat16', 'linux-swap', 'reiserfs', + 'hfs', 'hfs+', 'hfsx', 'NTFS', 'ntfs', 'ufs', 'xfs']) def mkfs(device, fs_type): diff --git a/tests/unit/modules/test_parted_partition.py b/tests/unit/modules/test_parted_partition.py index 1959e5978e..5d92bd6d14 100644 --- a/tests/unit/modules/test_parted_partition.py +++ b/tests/unit/modules/test_parted_partition.py @@ -377,6 +377,22 @@ class PartedTestCase(TestCase, LoaderModuleMockMixin): } self.assertEqual(output, expected) + def test_btrfs_fstypes(self): + '''Tests if we see btrfs as valid fs type''' + with patch('salt.modules.parted_partition._validate_device', MagicMock()): + try: + parted.mkfs('/dev/foo', 'btrfs') + except CommandExecutionError: + self.fail("Btrfs is not in the supported fstypes") + + def test_xfs_fstypes(self): + '''Tests if we see xfs as valid fs type''' + with patch('salt.modules.parted_partition._validate_device', MagicMock()): + try: + parted.mkfs('/dev/foo', 'xfs') + except CommandExecutionError: + self.fail("XFS is not in the supported fstypes") + def test_disk_set(self): with patch('salt.modules.parted_partition._validate_device', MagicMock()): self.cmdrun.return_value = '' -- 2.16.4 ++++++ switch-firewalld-state-to-use-change_interface.patch ++++++
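A usage sketch for the widened whitelist; the parted module is exposed as ``partition``, and the device paths are illustrative (the matching mkfs.btrfs/mkfs.xfs tools must exist on the minion):

.. code-block:: python

    import salt.client

    caller = salt.client.Caller()
    # Both calls were rejected by _is_fstype() before this patch:
    caller.cmd('partition.mkfs', '/dev/sdb1', 'btrfs')
    caller.cmd('partition.mkfs', '/dev/sdb2', 'xfs')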
From ee499612e1302b908a64dde696065b0093fe3115 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Pablo=20Su=C3=A1rez=20Hern=C3=A1ndez?= <psuarezhernandez@suse.com> Date: Mon, 20 May 2019 11:59:39 +0100 Subject: [PATCH] Switch firewalld state to use change_interface
The firewalld.present state allows binding an interface to a given zone. However, if the interface is already bound to some other zone, calling `add_interface` will not rebind the interface but will report an error. The `change_interface` option, however, can rebind the interface from one zone to another. This PR adds a `firewalld.change_interface` call to the firewalld module and updates the `firewalld.present` state to use this call. --- salt/modules/firewalld.py | 23 +++++++++++++++++++++++ salt/states/firewalld.py | 4 ++-- 2 files changed, 25 insertions(+), 2 deletions(-) diff --git a/salt/modules/firewalld.py b/salt/modules/firewalld.py index 7eeb865fa7..232fe052a2 100644 --- a/salt/modules/firewalld.py +++ b/salt/modules/firewalld.py @@ -951,6 +951,29 @@ def remove_interface(zone, interface, permanent=True): return __firewall_cmd(cmd) +def change_interface(zone, interface, permanent=True): + ''' + Change the zone an interface is bound to + + .. versionadded:: 2019.?.? + + CLI Example: + + .. code-block:: bash + + salt '*' firewalld.change_interface zone eth0 + ''' + if interface in get_interfaces(zone, permanent): + log.info('Interface is already bound to zone.') + + cmd = '--zone={0} --change-interface={1}'.format(zone, interface) + + if permanent: + cmd += ' --permanent' + + return __firewall_cmd(cmd) + + def get_sources(zone, permanent=True): ''' List sources bound to a zone diff --git a/salt/states/firewalld.py b/salt/states/firewalld.py index 4623798658..fc5b233f98 100644 --- a/salt/states/firewalld.py +++ b/salt/states/firewalld.py @@ -647,8 +647,8 @@ def _present(name, for interface in new_interfaces: if not __opts__['test']: try: - __salt__['firewalld.add_interface'](name, interface, - permanent=True) + __salt__['firewalld.change_interface'](name, interface, + permanent=True) except CommandExecutionError as err: ret['comment'] = 'Error: {0}'.format(err) return ret -- 2.17.1 ++++++ take-checksums-arg-into-account-for-postgres.datadir.patch ++++++
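A sketch of the behavioural difference introduced above (zone and interface names illustrative):

.. code-block:: python

    import salt.client

    caller = salt.client.Caller()

    # Fails if eth0 is currently bound to a different zone:
    # caller.cmd('firewalld.add_interface', 'public', 'eth0')

    # Rebinds regardless of the current zone; this is what the
    # firewalld.present state now calls:
    caller.cmd('firewalld.change_interface', 'public', 'eth0')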
From 7ed3e99a4979a13c7142ed5ba73c09a282e03147 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Pablo=20Su=C3=A1rez=20Hern=C3=A1ndez?= <psuarezhernandez@suse.com> Date: Thu, 26 Sep 2019 15:57:58 +0100 Subject: [PATCH] Take checksums arg into account for postgres.datadir_init (bsc#1151650)
Update unit test for postgres.datadir_init --- salt/modules/postgres.py | 1 + tests/unit/modules/test_postgres.py | 1 + 2 files changed, 2 insertions(+) diff --git a/salt/modules/postgres.py b/salt/modules/postgres.py index b6f7cbe5d4..f0d1b034b9 100644 --- a/salt/modules/postgres.py +++ b/salt/modules/postgres.py @@ -3151,6 +3151,7 @@ def datadir_init(name, password=password, encoding=encoding, locale=locale, + checksums=checksums, runas=runas) return ret['retcode'] == 0 diff --git a/tests/unit/modules/test_postgres.py b/tests/unit/modules/test_postgres.py index 03fb7fddfd..6f10fcf2e0 100644 --- a/tests/unit/modules/test_postgres.py +++ b/tests/unit/modules/test_postgres.py @@ -1467,6 +1467,7 @@ class PostgresTestCase(TestCase, LoaderModuleMockMixin): locale=None, password='test', runas='postgres', + checksums=False, user='postgres', ) self.assertTrue(ret) -- 2.22.0 ++++++ temporary-fix-extend-the-whitelist-of-allowed-comman.patch ++++++
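A usage sketch for the now-honoured argument (data directory and user illustrative); before the patch, ``checksums=True`` was accepted but silently dropped before reaching initdb:

.. code-block:: python

    import salt.client

    caller = salt.client.Caller()
    caller.cmd('postgres.datadir_init', '/var/lib/pgsql/data',
               checksums=True, runas='postgres')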
From c9c50ab75b4a8a73f57e8c2eeaa24401409e8c3c Mon Sep 17 00:00:00 2001 From: Bo Maryniuk <bo@suse.de> Date: Thu, 24 Jan 2019 18:12:35 +0100 Subject: [PATCH] temporary fix: extend the whitelist of allowed commands
--- salt/auth/__init__.py | 2 ++ 1 file changed, 2 insertions(+) diff --git a/salt/auth/__init__.py b/salt/auth/__init__.py index ca7168d00e..aa4c5c3670 100644 --- a/salt/auth/__init__.py +++ b/salt/auth/__init__.py @@ -46,6 +46,8 @@ AUTH_INTERNAL_KEYWORDS = frozenset([ 'gather_job_timeout', 'kwarg', 'match', + "id_", + "force", 'metadata', 'print_event', 'raw', -- 2.20.1 ++++++ travis.yml ++++++ language: python python: - '2.6' - '2.7' before_install: - sudo apt-get update - sudo apt-get install --fix-broken --ignore-missing -y -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" swig rabbitmq-server ruby python-apt mysql-server libmysqlclient-dev - (git describe && git fetch --tags) || (git remote add upstream git://github.com/saltstack/salt.git && git fetch --tags upstream) - pip install mock - pip install --allow-external http://dl.dropbox.com/u/174789/m2crypto-0.20.1.tar.gz - pip install --upgrade pep8 'pylint<=1.2.0' - pip install --upgrade coveralls - "if [[ $TRAVIS_PYTHON_VERSION == '2.6' ]]; then pip install unittest2 ordereddict; fi" - pip install git+https://github.com/saltstack/salt-testing.git#egg=SaltTesting install: - pip install -r requirements/zeromq.txt -r requirements/cloud.txt - pip install --allow-all-external -r requirements/opt.txt before_script: - "/home/travis/virtualenv/python${TRAVIS_PYTHON_VERSION}/bin/pylint --rcfile=.testing.pylintrc salt/ && echo 'Finished Pylint Check Cleanly' || echo 'Finished Pylint Check With Errors'" - "/home/travis/virtualenv/python${TRAVIS_PYTHON_VERSION}/bin/pep8 --ignore=E501,E12 salt/ && echo 'Finished PEP-8 Check Cleanly' || echo 'Finished PEP-8 Check With Errors'" script: "sudo -E /home/travis/virtualenv/python${TRAVIS_PYTHON_VERSION}/bin/python setup.py test --runtests-opts='--run-destructive --sysinfo -v --coverage'" after_success: - coveralls notifications: irc: channels: "irc.freenode.org#salt-devel" on_success: change on_failure: change ++++++ try-except-undefineflags-as-this-operation-is-not-su.patch ++++++
From e0bded83fa691c3b972fa4c22b14c5ac0a7a3f13 Mon Sep 17 00:00:00 2001 From: Jeroen Schutrup <jeroenschutrup@hotmail.nl> Date: Sun, 12 Aug 2018 19:43:22 +0200 Subject: [PATCH] Try/except undefineFlags() as this operation is not supported on bhyve
(cherry picked from commit 29a44aceb1a73347ac07dd241b4a64a4a38cef6e) --- salt/modules/virt.py | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/salt/modules/virt.py b/salt/modules/virt.py index a3f625909d..423016cd90 100644 --- a/salt/modules/virt.py +++ b/salt/modules/virt.py @@ -3189,7 +3189,10 @@ def purge(vm_, dirs=False, removables=None, **kwargs): shutil.rmtree(dir_) if getattr(libvirt, 'VIR_DOMAIN_UNDEFINE_NVRAM', False): # This one is only in 1.2.8+ - dom.undefineFlags(libvirt.VIR_DOMAIN_UNDEFINE_NVRAM) + try: + dom.undefineFlags(libvirt.VIR_DOMAIN_UNDEFINE_NVRAM) + except Exception: + dom.undefine() else: dom.undefine() conn.close() -- 2.21.0 ++++++ update-documentation.sh ++++++ #!/bin/bash # # Update html.tar.bz2 documentation tarball # Author: Bo Maryniuk <bo@suse.de> # NO_SPHINX_PARAM="--without-sphinx" function build_virtenv() { virtualenv --system-site-packages $1 source $1/bin/activate pip install --upgrade pip if [ -z "$2" ]; then pip install -I Sphinx fi } function check_env() { if [[ -z "$1" || "$1" != "$NO_SPHINX_PARAM" ]] && [ ! -z "$(which sphinx-build 2>/dev/null)" ]; then cat <<EOF You've installed Spinx globally. But it might be outdated or clash with the version I am going to install into the temporary virtual environment from PIP. Please consider to remove Sphinx from your system, perhaps? Or pass me "$NO_SPHINX_PARAM" param so I will try reusing yours and see what happens. :) EOF exit 1; fi for cmd in "make" "quilt" "virtualenv" "pip"; do if [ -z "$(which $cmd 2>/dev/null)" ]; then echo "Error: '$cmd' is still missing. Install it, please." exit 1; fi done } function quilt_setup() { quilt setup salt.spec cd $1 quilt push -a } function build_docs() { cd $1 make html rm _build/html/.buildinfo cd _build/html chmod -R -x+X * cd .. tar cvf - html | bzip2 > $2/html.tar.bz2 } function write_changelog() { mv salt.changes salt.changes.previous TIME=$(date -u +'%a %b %d %T %Z %Y') MAIL=$1 SEP="-------------------------------------------------------------------" cat <<EOF > salt.changes $SEP $TIME - $MAIL - Updated html.tar.bz2 documentation tarball. EOF cat salt.changes.previous >> salt.changes rm salt.changes.previous } if [ -z "$1" ]; then echo "Usage: $0 <your e-mail> [--without-sphinx]" exit 1; fi check_env $2; START=$(pwd) V_ENV="sphinx_doc_gen" V_TMP=$(mktemp -d) for f in "salt.spec" "salt*tar.gz"; do cp -v $f $V_TMP done cd $V_TMP; build_virtenv $V_ENV $2; SRC_DIR="salt-$(cat salt.spec | grep ^Version: | cut -d: -f2 | sed -e 's/[[:blank:]]//g')"; quilt_setup $SRC_DIR build_docs doc $V_TMP cd $START mv $V_TMP/html.tar.bz2 $START rm -rf $V_TMP echo "Done" echo "---------------" ++++++ use-adler32-algorithm-to-compute-string-checksums.patch ++++++
From 9d09fcb60b8babd415af76812c93d38b6cbce661 Mon Sep 17 00:00:00 2001 From: Bo Maryniuk <bo@suse.de> Date: Sat, 28 Jul 2018 22:59:04 +0200 Subject: [PATCH] Use Adler32 algorithm to compute string checksums
Generate the same numeric value across all Python versions and platforms Re-add getting hash by Python shell-out method Add an option to choose between default hashing, Adler32 or CRC32 algorithms Set default config option for server_id hashing to False on minion Choose CRC method, default to faster but less reliable "adler32", if crc is in use Add warning for Sodium. --- salt/config/__init__.py | 7 +++++- salt/grains/core.py | 53 +++++++++++++++++++++++++++-------------- 2 files changed, 41 insertions(+), 19 deletions(-) diff --git a/salt/config/__init__.py b/salt/config/__init__.py index 6b74b90ce0..5d0c18b5d1 100644 --- a/salt/config/__init__.py +++ b/salt/config/__init__.py @@ -1212,6 +1212,10 @@ VALID_OPTS = { # Thorium top file location 'thorium_top': six.string_types, + + # Use Adler32 hashing algorithm for server_id (default False until Sodium, "adler32" after) + # Possible values are: False, adler32, crc32 + 'server_id_use_crc': (bool, six.string_types), } # default configurations @@ -1520,7 +1524,8 @@ DEFAULT_MINION_OPTS = { }, 'discovery': False, 'schedule': {}, - 'ssh_merge_pillar': True + 'ssh_merge_pillar': True, + 'server_id_use_crc': False, } DEFAULT_MASTER_OPTS = { diff --git a/salt/grains/core.py b/salt/grains/core.py index 85a929a485..378d3cb786 100644 --- a/salt/grains/core.py +++ b/salt/grains/core.py @@ -20,6 +20,7 @@ import platform import logging import locale import uuid +import zlib from errno import EACCES, EPERM import datetime import warnings @@ -61,6 +62,7 @@ import salt.utils.path import salt.utils.pkg.rpm import salt.utils.platform import salt.utils.stringutils +import salt.utils.versions from salt.ext import six from salt.ext.six.moves import range @@ -2730,40 +2732,55 @@ def _hw_data(osdata): return grains -def get_server_id(): +def _get_hash_by_shell(): ''' - Provides an integer based on the FQDN of a machine. - Useful as server-id in MySQL replication or anywhere else you'll need an ID - like this. + Shell out to Python 3 to compute a reliable hash + :return: ''' - # Provides: - # server_id - - if salt.utils.platform.is_proxy(): - return {} id_ = __opts__.get('id', '') id_hash = None py_ver = sys.version_info[:2] if py_ver >= (3, 3): # Python 3.3 enabled hash randomization, so we need to shell out to get # a reliable hash. - id_hash = __salt__['cmd.run']( - [sys.executable, '-c', 'print(hash("{0}"))'.format(id_)], - env={'PYTHONHASHSEED': '0'} - ) + id_hash = __salt__['cmd.run']([sys.executable, '-c', 'print(hash("{0}"))'.format(id_)], + env={'PYTHONHASHSEED': '0'}) try: id_hash = int(id_hash) except (TypeError, ValueError): - log.debug( - 'Failed to hash the ID to get the server_id grain. Result of ' - 'hash command: %s', id_hash - ) + log.debug('Failed to hash the ID to get the server_id grain. Result of hash command: %s', id_hash) id_hash = None if id_hash is None: # Python < 3.3 or error encountered above id_hash = hash(id_) - return {'server_id': abs(id_hash % (2 ** 31))} + return abs(id_hash % (2 ** 31)) + + +def get_server_id(): + ''' + Provides an integer based on the FQDN of a machine. + Useful as server-id in MySQL replication or anywhere else you'll need an ID + like this. + ''' + # Provides: + # server_id + + if salt.utils.platform.is_proxy(): + server_id = {} + else: + use_crc = __opts__.get('server_id_use_crc') + if bool(use_crc): + id_hash = getattr(zlib, use_crc, zlib.adler32)(__opts__.get('id', '').encode()) & 0xffffffff + else: + salt.utils.versions.warn_until('Sodium', 'This server_id is computed neither by Adler32 nor by CRC32. ' + 'Please use the "server_id_use_crc" option and define the algorithm you ' + 'prefer (default "Adler32"). The server_id will be computed with ' + 'Adler32 by default.') + id_hash = _get_hash_by_shell() + server_id = {'server_id': id_hash} + + return server_id def get_master(): -- 2.20.1 ++++++ use-current-ioloop-for-the-localclient-instance-of-b.patch ++++++
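Why zlib checksums fit here: ``hash()`` is randomized per process on Python 3.3+ unless PYTHONHASHSEED is pinned, while ``zlib.adler32`` and ``zlib.crc32`` are deterministic across runs, interpreters and platforms. A sketch (minion id illustrative):

.. code-block:: python

    import zlib

    minion_id = 'minion-01.example.com'
    # Deterministic; the mask keeps it an unsigned 32-bit value, matching
    # what the patched get_server_id() stores in the grain.
    server_id = zlib.adler32(minion_id.encode()) & 0xffffffff
    assert server_id == zlib.adler32(minion_id.encode()) & 0xffffffff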
From 55d8a777d6a9b19c959e14a4060e5579e92cd106 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Pablo=20Su=C3=A1rez=20Hern=C3=A1ndez?= <psuarezhernandez@suse.com> Date: Thu, 3 Oct 2019 15:19:02 +0100 Subject: [PATCH] Use current IOLoop for the LocalClient instance of BatchAsync (bsc#1137642)
--- salt/cli/batch_async.py | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/salt/cli/batch_async.py b/salt/cli/batch_async.py index 2bb50459c8..f9e736f804 100644 --- a/salt/cli/batch_async.py +++ b/salt/cli/batch_async.py @@ -52,7 +52,7 @@ class BatchAsync(object): ''' def __init__(self, parent_opts, jid_gen, clear_load): ioloop = tornado.ioloop.IOLoop.current() - self.local = salt.client.get_local_client(parent_opts['conf_file']) + self.local = salt.client.get_local_client(parent_opts['conf_file'], io_loop=ioloop) if 'gather_job_timeout' in clear_load['kwargs']: clear_load['gather_job_timeout'] = clear_load['kwargs'].pop('gather_job_timeout') else: @@ -266,6 +266,7 @@ class BatchAsync(object): yield def __del__(self): + self.local = None self.event = None self.ioloop = None gc.collect() -- 2.22.0 ++++++ use-threadpool-from-multiprocessing.pool-to-avoid-le.patch ++++++
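A sketch of the call pattern the one-line fix enforces (config path illustrative): passing the current IOLoop keeps the LocalClient's event plumbing on the same loop that drives BatchAsync, instead of implicitly creating a second one.

.. code-block:: python

    import tornado.ioloop
    import salt.client

    io_loop = tornado.ioloop.IOLoop.current()
    local = salt.client.get_local_client('/etc/salt/master',
                                         io_loop=io_loop)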
From cd8e175738f7742fbb7c9e9d329039371bc0e579 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Pablo=20Su=C3=A1rez=20Hern=C3=A1ndez?= <psuarezhernandez@suse.com> Date: Tue, 30 Apr 2019 10:51:42 +0100 Subject: [PATCH] Use ThreadPool from multiprocessing.pool to avoid leaks
--- salt/grains/core.py | 14 +++++++++----- 1 file changed, 9 insertions(+), 5 deletions(-) diff --git a/salt/grains/core.py b/salt/grains/core.py index 796458939d..fec7b204bc 100644 --- a/salt/grains/core.py +++ b/salt/grains/core.py @@ -26,7 +26,7 @@ from errno import EACCES, EPERM import datetime import warnings -from multiprocessing.dummy import Pool as ThreadPool +from multiprocessing.pool import ThreadPool # pylint: disable=import-error try: @@ -2225,10 +2225,14 @@ def fqdns(): # Create a ThreadPool to process the underlying calls to 'socket.gethostbyaddr' in parallel. # This avoid blocking the execution when the "fqdn" is not defined for certains IP addresses, which was causing # that "socket.timeout" was reached multiple times secuencially, blocking execution for several seconds. - pool = ThreadPool(8) - results = pool.map(_lookup_fqdn, addresses) - pool.close() - pool.join() + + try: + pool = ThreadPool(8) + results = pool.map(_lookup_fqdn, addresses) + pool.close() + pool.join() + except Exception as exc: + log.error("Exception while creating a ThreadPool for resolving FQDNs: %s", exc) for item in results: if item: -- 2.17.1 ++++++ various-netapi-fixes-and-tests.patch ++++++
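A standalone sketch of the guarded pool usage the patch introduces (addresses illustrative); note the import now comes from ``multiprocessing.pool`` rather than ``multiprocessing.dummy``:

.. code-block:: python

    import socket
    from multiprocessing.pool import ThreadPool

    def lookup(addr):
        try:
            return socket.gethostbyaddr(addr)[0]
        except socket.error:
            return None

    results = []
    try:
        pool = ThreadPool(8)
        results = pool.map(lookup, ['127.0.0.1', '::1'])
        pool.close()
        pool.join()
    except Exception as exc:  # mirror the patch: never abort grain loading
        print('Exception while creating a ThreadPool: {}'.format(exc))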
From 95f38ddf067b9c52654395a217afea988e44a54f Mon Sep 17 00:00:00 2001 From: Jochen Breuer <jbreuer@suse.de> Date: Wed, 19 Feb 2020 14:37:05 +0100 Subject: [PATCH] various netapi fixes and tests
--- conf/master | 6 ++ salt/config/__init__.py | 6 +- salt/netapi/__init__.py | 7 +- tests/integration/netapi/test_client.py | 150 +++++++++++++++++++++++++++++++- tests/support/helpers.py | 19 ++++ 5 files changed, 185 insertions(+), 3 deletions(-) diff --git a/conf/master b/conf/master index 06bed3ea44..349d971414 100644 --- a/conf/master +++ b/conf/master @@ -1291,3 +1291,9 @@ syndic_user: salt # use OS defaults, typically 75 seconds on Linux, see # /proc/sys/net/ipv4/tcp_keepalive_intvl. #tcp_keepalive_intvl: -1 + + +##### NetAPI settings ##### +############################################ +# Allow the raw_shell parameter to be used when calling Salt SSH client via API +#netapi_allow_raw_shell: True diff --git a/salt/config/__init__.py b/salt/config/__init__.py index 5d0c18b5d1..dc257ff8b8 100644 --- a/salt/config/__init__.py +++ b/salt/config/__init__.py @@ -1216,6 +1216,10 @@ VALID_OPTS = { # Use Adler32 hashing algorithm for server_id (default False until Sodium, "adler32" after) # Possible values are: False, adler32, crc32 'server_id_use_crc': (bool, six.string_types), + + # Allow raw_shell option when using the ssh + # client via the Salt API + 'netapi_allow_raw_shell': bool, } # default configurations @@ -1869,9 +1873,9 @@ DEFAULT_MASTER_OPTS = { 'auth_events': True, 'minion_data_cache_events': True, 'enable_ssh_minions': False, + 'netapi_allow_raw_shell': False, } - # ----- Salt Proxy Minion Configuration Defaults -----------------------------------> # These are merged with DEFAULT_MINION_OPTS since many of them also apply here. DEFAULT_PROXY_MINION_OPTS = { diff --git a/salt/netapi/__init__.py b/salt/netapi/__init__.py index 43b6e943a7..31a24bb420 100644 --- a/salt/netapi/__init__.py +++ b/salt/netapi/__init__.py @@ -71,10 +71,15 @@ class NetapiClient(object): raise salt.exceptions.SaltInvocationError( 'Invalid client specified: \'{0}\''.format(low.get('client'))) - if not ('token' in low or 'eauth' in low) and low['client'] != 'ssh': + if not ('token' in low or 'eauth' in low): raise salt.exceptions.EauthAuthenticationError( 'No authentication credentials given') + if low.get('raw_shell') and \ + not self.opts.get('netapi_allow_raw_shell'): + raise salt.exceptions.EauthAuthenticationError( + 'Raw shell option not allowed.') + l_fun = getattr(self, low['client']) f_call = salt.utils.args.format_call(l_fun, low) return l_fun(*f_call.get('args', ()), **f_call.get('kwargs', {})) diff --git a/tests/integration/netapi/test_client.py b/tests/integration/netapi/test_client.py index 503bbaf335..a886563e3d 100644 --- a/tests/integration/netapi/test_client.py +++ b/tests/integration/netapi/test_client.py @@ -2,17 +2,32 @@ # Import Python libs from __future__ import absolute_import, print_function, unicode_literals +import logging import os import time # Import Salt Testing libs -from tests.support.paths import TMP_CONF_DIR +from tests.support.paths import TMP_CONF_DIR, TMP +from tests.support.runtests import RUNTIME_VARS from tests.support.unit import TestCase, skipIf +from tests.support.mock import patch +from tests.support.case import SSHCase +from tests.support.helpers import ( + Webserver, + SaveRequestsPostHandler, + requires_sshd_server +) # Import Salt libs import salt.config import salt.netapi +from salt.exceptions import ( + EauthAuthenticationError +) + +log = logging.getLogger(__name__) + class NetapiClientTest(TestCase): eauth_creds = { @@ -74,6 +89,12 @@ class NetapiClientTest(TestCase): pass self.assertEqual(ret, {'minions': sorted(['minion', 'sub_minion'])}) + def 
test_local_unauthenticated(self): + low = {'client': 'local', 'tgt': '*', 'fun': 'test.ping'} + + with self.assertRaises(EauthAuthenticationError) as excinfo: + ret = self.netapi.run(low) + def test_wheel(self): low = {'client': 'wheel', 'fun': 'key.list_all'} low.update(self.eauth_creds) @@ -107,6 +128,12 @@ class NetapiClientTest(TestCase): self.assertIn('jid', ret) self.assertIn('tag', ret) + def test_wheel_unauthenticated(self): + low = {'client': 'wheel', 'tgt': '*', 'fun': 'test.ping'} + + with self.assertRaises(EauthAuthenticationError) as excinfo: + ret = self.netapi.run(low) + @skipIf(True, 'This is not testing anything. Skipping for now.') def test_runner(self): # TODO: fix race condition in init of event-- right now the event class @@ -125,3 +152,124 @@ class NetapiClientTest(TestCase): low.update(self.eauth_creds) ret = self.netapi.run(low) + + def test_runner_unauthenticated(self): + low = {'client': 'runner', 'tgt': '*', 'fun': 'test.ping'} + + with self.assertRaises(EauthAuthenticationError) as excinfo: + ret = self.netapi.run(low) + + +@requires_sshd_server +class NetapiSSHClientTest(SSHCase): + eauth_creds = { + 'username': 'saltdev_auto', + 'password': 'saltdev', + 'eauth': 'auto', + } + + def setUp(self): + ''' + Set up a NetapiClient instance + ''' + opts = salt.config.client_config(os.path.join(TMP_CONF_DIR, 'master')) + self.netapi = salt.netapi.NetapiClient(opts) + self.priv_file = os.path.join(RUNTIME_VARS.TMP_CONF_DIR, 'key_test') + self.rosters = os.path.join(RUNTIME_VARS.TMP_CONF_DIR) + + self.priv_file = os.path.join(RUNTIME_VARS.TMP_CONF_DIR, 'key_test') + self.rosters = os.path.join(RUNTIME_VARS.TMP_CONF_DIR) + + # Initialize salt-ssh + self.run_function('test.ping') + + def tearDown(self): + del self.netapi + + @classmethod + def setUpClass(cls): + cls.post_webserver = Webserver(handler=SaveRequestsPostHandler) + cls.post_webserver.start() + cls.post_web_root = cls.post_webserver.web_root + cls.post_web_handler = cls.post_webserver.handler + + @classmethod + def tearDownClass(cls): + cls.post_webserver.stop() + del cls.post_webserver + + def test_ssh(self): + low = {'client': 'ssh', + 'tgt': 'localhost', + 'fun': 'test.ping', + 'ignore_host_keys': True, + 'roster_file': 'roster', + 'rosters': [self.rosters], + 'ssh_priv': self.priv_file} + + low.update(self.eauth_creds) + + ret = self.netapi.run(low) + + self.assertIn('localhost', ret) + self.assertIn('return', ret['localhost']) + self.assertEqual(ret['localhost']['return'], True) + self.assertEqual(ret['localhost']['id'], 'localhost') + self.assertEqual(ret['localhost']['fun'], 'test.ping') + + def test_ssh_unauthenticated(self): + low = {'client': 'ssh', 'tgt': 'localhost', 'fun': 'test.ping'} + + with self.assertRaises(EauthAuthenticationError) as excinfo: + ret = self.netapi.run(low) + + def test_ssh_unauthenticated_raw_shell_curl(self): + + fun = '-o ProxyCommand curl {0}'.format(self.post_web_root) + low = {'client': 'ssh', + 'tgt': 'localhost', + 'fun': fun, + 'raw_shell': True} + + ret = None + with self.assertRaises(EauthAuthenticationError) as excinfo: + ret = self.netapi.run(low) + + self.assertEqual(self.post_web_handler.received_requests, []) + self.assertEqual(ret, None) + + def test_ssh_unauthenticated_raw_shell_touch(self): + + badfile = os.path.join(TMP, 'badfile.txt') + fun = '-o ProxyCommand touch {0}'.format(badfile) + low = {'client': 'ssh', + 'tgt': 'localhost', + 'fun': fun, + 'raw_shell': True} + + ret = None + with self.assertRaises(EauthAuthenticationError) as excinfo: + ret = 
self.netapi.run(low) + + self.assertEqual(ret, None) + self.assertFalse(os.path.exists('badfile.txt')) + + def test_ssh_authenticated_raw_shell_disabled(self): + + badfile = os.path.join(TMP, 'badfile.txt') + fun = '-o ProxyCommand touch {0}'.format(badfile) + low = {'client': 'ssh', + 'tgt': 'localhost', + 'fun': fun, + 'raw_shell': True} + + low.update(self.eauth_creds) + + ret = None + with patch.dict(self.netapi.opts, + {'netapi_allow_raw_shell': False}): + with self.assertRaises(EauthAuthenticationError) as excinfo: + ret = self.netapi.run(low) + + self.assertEqual(ret, None) + self.assertFalse(os.path.exists('badfile.txt')) diff --git a/tests/support/helpers.py b/tests/support/helpers.py index 626da6a069..e5ca5918c9 100644 --- a/tests/support/helpers.py +++ b/tests/support/helpers.py @@ -1582,6 +1582,25 @@ class Webserver(object): self.server_thread.join() +class SaveRequestsPostHandler(tornado.web.RequestHandler): + ''' + Save all requests sent to the server. + ''' + received_requests = [] + + def post(self, *args): # pylint: disable=arguments-differ + ''' + Handle the post + ''' + self.received_requests.append(self.request) + + def data_received(self): # pylint: disable=arguments-differ + ''' + Streaming not used for testing + ''' + raise NotImplementedError() + + def win32_kill_process_tree(pid, sig=signal.SIGTERM, include_parent=True, timeout=None, on_terminate=None): ''' -- 2.16.4 ++++++ virt-1.volume_infos-fix-for-single-vm.patch ++++++
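From the caller's side, a sketch of the two rejections the patch above adds to ``NetapiClient.run()`` (paths and target illustrative):

.. code-block:: python

    import salt.config
    import salt.netapi

    opts = salt.config.client_config('/etc/salt/master')  # illustrative path
    client = salt.netapi.NetapiClient(opts)

    low = {'client': 'ssh',
           'tgt': 'localhost',
           'fun': '-o ProxyCommand touch /tmp/x',
           'raw_shell': True}

    # Raises EauthAuthenticationError twice over: there are no token/eauth
    # credentials in 'low' (the ssh client is no longer exempt), and
    # raw_shell is used while the master keeps the new default
    # netapi_allow_raw_shell: False.
    client.run(low)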
From 9fcf9a768d0f11e04e145612cc5b2c05cfbf5378 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?C=C3=A9dric=20Bosdonnat?= <cbosdonnat@suse.com> Date: Thu, 4 Apr 2019 16:18:58 +0200 Subject: [PATCH] virt.volume_infos fix for single VM
virt.volume_infos: don't raise an error if there is no VM --- salt/modules/virt.py | 8 ++++-- tests/unit/modules/test_virt.py | 46 +++++++++++++++++++++++++++++++++ 2 files changed, 52 insertions(+), 2 deletions(-) diff --git a/salt/modules/virt.py b/salt/modules/virt.py index d160f0905f..953064cc2c 100644 --- a/salt/modules/virt.py +++ b/salt/modules/virt.py @@ -5050,8 +5050,12 @@ def volume_infos(pool=None, volume=None, **kwargs): conn = __get_conn(**kwargs) try: backing_stores = _get_all_volumes_paths(conn) - domains = _get_domain(conn) - domains_list = domains if isinstance(domains, list) else [domains] + try: + domains = _get_domain(conn) + domains_list = domains if isinstance(domains, list) else [domains] + except CommandExecutionError: + # Having no VM is not an error here. + domains_list = [] disks = {domain.name(): {node.get('file') for node in ElementTree.fromstring(domain.XMLDesc(0)).findall('.//disk/source/[@file]')} diff --git a/tests/unit/modules/test_virt.py b/tests/unit/modules/test_virt.py index cc62b67918..b343b9bc31 100644 --- a/tests/unit/modules/test_virt.py +++ b/tests/unit/modules/test_virt.py @@ -2910,6 +2910,52 @@ class VirtTestCase(TestCase, LoaderModuleMockMixin): } }) + # No VM test + with patch('salt.modules.virt._get_domain', MagicMock(side_effect=CommandExecutionError('no VM'))): + actual = virt.volume_infos('pool0', 'vol0') + self.assertEqual(1, len(actual.keys())) + self.assertEqual(1, len(actual['pool0'].keys())) + self.assertEqual([], sorted(actual['pool0']['vol0']['used_by'])) + self.assertEqual('/path/to/vol0.qcow2', actual['pool0']['vol0']['path']) + self.assertEqual('file', actual['pool0']['vol0']['type']) + self.assertEqual('/key/of/vol0', actual['pool0']['vol0']['key']) + self.assertEqual(123456789, actual['pool0']['vol0']['capacity']) + self.assertEqual(123456, actual['pool0']['vol0']['allocation']) + + self.assertEqual(virt.volume_infos('pool1', None), { + 'pool1': { + 'vol1': { + 'type': 'file', + 'key': '/key/of/vol1', + 'path': '/path/to/vol1.qcow2', + 'capacity': 12345, + 'allocation': 1234, + 'used_by': [], + }, + 'vol2': { + 'type': 'file', + 'key': '/key/of/vol2', + 'path': '/path/to/vol2.qcow2', + 'capacity': 12345, + 'allocation': 1234, + 'used_by': [], + } + } + }) + + self.assertEqual(virt.volume_infos(None, 'vol2'), { + 'pool1': { + 'vol2': { + 'type': 'file', + 'key': '/key/of/vol2', + 'path': '/path/to/vol2.qcow2', + 'capacity': 12345, + 'allocation': 1234, + 'used_by': [], + } + } + }) + def test_volume_delete(self): ''' Test virt.volume_delete -- 2.21.0 ++++++ virt-adding-kernel-boot-parameters-to-libvirt-xml-55.patch ++++++
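A quick check of the fixed behaviour (pool and volume names illustrative): on a host with zero defined domains, ``_get_domain()`` raises ``CommandExecutionError``, which ``volume_infos`` now treats as "no VMs" instead of failing.

.. code-block:: python

    import salt.client

    caller = salt.client.Caller()
    info = caller.cmd('virt.volume_infos', 'default', 'vol0')
    # -> volume data with an empty 'used_by' list rather than an error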
From ce8d431fad81547d1e76f264a172d27e2b4fccb1 Mon Sep 17 00:00:00 2001 From: Larry Dewey <ldewey@suse.com> Date: Tue, 7 Jan 2020 02:48:11 -0700 Subject: [PATCH] virt: adding kernel boot parameters to libvirt xml #55245 (#197)
* virt: adding kernel boot parameters to libvirt xml SUSE's autoyast and Red Hat's kickstart take advantage of kernel paths, initrd paths, and kernel boot command line parameters. These changes provide the option of using these, and will allow salt and autoyast/kickstart to work together. Signed-off-by: Larry Dewey <ldewey@suse.com> * virt: Download linux and initrd Signed-off-by: Larry Dewey <ldewey@suse.com> --- salt/modules/virt.py | 129 ++++++++++++++++++++++- salt/states/virt.py | 29 ++++- salt/templates/virt/libvirt_domain.jinja | 12 ++- salt/utils/virt.py | 45 +++++++- tests/unit/modules/test_virt.py | 79 +++++++++++++- tests/unit/states/test_virt.py | 19 +++- 6 files changed, 302 insertions(+), 11 deletions(-) diff --git a/salt/modules/virt.py b/salt/modules/virt.py index dedcf8cb6f..0f62856f5c 100644 --- a/salt/modules/virt.py +++ b/salt/modules/virt.py @@ -106,6 +106,8 @@ import salt.utils.templates import salt.utils.validate.net import salt.utils.versions import salt.utils.yaml + +from salt.utils.virt import check_remote, download_remote from salt.exceptions import CommandExecutionError, SaltInvocationError from salt.ext import six from salt.ext.six.moves import range # pylint: disable=import-error,redefined-builtin @@ -119,6 +121,8 @@ JINJA = jinja2.Environment( ) ) +CACHE_DIR = '/var/lib/libvirt/saltinst' + VIRT_STATE_NAME_MAP = {0: 'running', 1: 'running', 2: 'running', @@ -532,6 +536,7 @@ def _gen_xml(name, os_type, arch, graphics=None, + boot=None, **kwargs): ''' Generate the XML string to define a libvirt VM @@ -568,11 +573,15 @@ def _gen_xml(name, else: context['boot_dev'] = ['hd'] + context['boot'] = boot if boot else {} + if os_type == 'xen': # Compute the Xen PV boot method if __grains__['os_family'] == 'Suse': - context['kernel'] = '/usr/lib/grub2/x86_64-xen/grub.xen' - context['boot_dev'] = [] + if not boot or not boot.get('kernel', None): + context['boot']['kernel'] = \ + '/usr/lib/grub2/x86_64-xen/grub.xen' + context['boot_dev'] = [] if 'serial_type' in kwargs: context['serial_type'] = kwargs['serial_type'] @@ -1115,6 +1124,34 @@ def _get_merged_nics(hypervisor, profile, interfaces=None, dmac=None): return nicp +def _handle_remote_boot_params(orig_boot): + """ + Checks if the boot parameters contain a remote path. If so, it will copy + the parameters, download the files specified in the remote path, and return + a new dictionary with updated paths containing the canonical path to the + kernel and/or initrd + + :param orig_boot: The original boot parameters passed to the init or update + functions. + """ + saltinst_dir = None + new_boot = orig_boot.copy() + + try: + for key in ['kernel', 'initrd']: + if check_remote(orig_boot.get(key)): + if saltinst_dir is None: + os.makedirs(CACHE_DIR) + saltinst_dir = CACHE_DIR + + new_boot[key] = download_remote(orig_boot.get(key), + saltinst_dir) + + return new_boot + except Exception as err: + raise err + + def init(name, cpu, mem, @@ -1136,6 +1173,7 @@ def init(name, graphics=None, os_type=None, arch=None, + boot=None, **kwargs): ''' Initialize a new vm @@ -1266,6 +1304,22 @@ def init(name, :param password: password to connect with, overriding defaults .. versionadded:: 2019.2.0 + :param boot: + Specifies kernel for the virtual machine, as well as boot parameters + for the virtual machine. This is an optionl parameter, and all of the + keys are optional within the dictionary. 
If a remote path is provided + to kernel or initrd, salt will handle the downloading of the specified + remote fild, and will modify the XML accordingly. + + .. code-block:: python + + { + 'kernel': '/root/f8-i386-vmlinuz', + 'initrd': '/root/f8-i386-initrd', + 'cmdline': 'console=ttyS0 ks=http://example.com/f8-i386/os/' + } + + .. versionadded:: neon .. _init-nic-def: @@ -1513,7 +1567,11 @@ def init(name, if arch is None: arch = 'x86_64' if 'x86_64' in arches else arches[0] - vm_xml = _gen_xml(name, cpu, mem, diskp, nicp, hypervisor, os_type, arch, graphics, **kwargs) + if boot is not None: + boot = _handle_remote_boot_params(boot) + + vm_xml = _gen_xml(name, cpu, mem, diskp, nicp, hypervisor, os_type, arch, + graphics, boot, **kwargs) conn = __get_conn(**kwargs) try: conn.defineXML(vm_xml) @@ -1692,6 +1750,7 @@ def update(name, interfaces=None, graphics=None, live=True, + boot=None, **kwargs): ''' Update the definition of an existing domain. @@ -1727,6 +1786,23 @@ def update(name, :param username: username to connect with, overriding defaults :param password: password to connect with, overriding defaults + :param boot: + Specifies kernel for the virtual machine, as well as boot parameters + for the virtual machine. This is an optionl parameter, and all of the + keys are optional within the dictionary. If a remote path is provided + to kernel or initrd, salt will handle the downloading of the specified + remote fild, and will modify the XML accordingly. + + .. code-block:: python + + { + 'kernel': '/root/f8-i386-vmlinuz', + 'initrd': '/root/f8-i386-initrd', + 'cmdline': 'console=ttyS0 ks=http://example.com/f8-i386/os/' + } + + .. versionadded:: neon + :return: Returns a dictionary indicating the status of what has been done. It is structured in @@ -1767,6 +1843,10 @@ def update(name, # Compute the XML to get the disks, interfaces and graphics hypervisor = desc.get('type') all_disks = _disk_profile(disk_profile, hypervisor, disks, name, **kwargs) + + if boot is not None: + boot = _handle_remote_boot_params(boot) + new_desc = ElementTree.fromstring(_gen_xml(name, cpu, mem, @@ -1776,6 +1856,7 @@ def update(name, domain.OSType(), desc.find('.//os/type').get('arch'), graphics, + boot, **kwargs)) # Update the cpu @@ -1785,6 +1866,48 @@ def update(name, cpu_node.set('current', six.text_type(cpu)) need_update = True + # Update the kernel boot parameters + boot_tags = ['kernel', 'initrd', 'cmdline'] + parent_tag = desc.find('os') + + # We need to search for each possible subelement, and update it. + for tag in boot_tags: + # The Existing Tag... + found_tag = desc.find(tag) + + # The new value + boot_tag_value = boot.get(tag, None) if boot else None + + # Existing tag is found and values don't match + if found_tag and found_tag.text != boot_tag_value: + + # If the existing tag is found, but the new value is None + # remove it. If the existing tag is found, and the new value + # doesn't match update it. In either case, mark for update. + if boot_tag_value is None \ + and boot is not None \ + and parent_tag is not None: + ElementTree.remove(parent_tag, tag) + else: + found_tag.text = boot_tag_value + + need_update = True + + # Existing tag is not found, but value is not None + elif found_tag is None and boot_tag_value is not None: + + # Need to check for parent tag, and add it if it does not exist. + # Add a subelement and set the value to the new value, and then + # mark for update. 
+ if parent_tag is not None: + child_tag = ElementTree.SubElement(parent_tag, tag) + else: + new_parent_tag = ElementTree.Element('os') + child_tag = ElementTree.SubElement(new_parent_tag, tag) + + child_tag.text = boot_tag_value + need_update = True + # Update the memory, note that libvirt outputs all memory sizes in KiB for mem_node_name in ['memory', 'currentMemory']: mem_node = desc.find(mem_node_name) diff --git a/salt/states/virt.py b/salt/states/virt.py index 68e9ac6fb6..c700cae849 100644 --- a/salt/states/virt.py +++ b/salt/states/virt.py @@ -264,7 +264,8 @@ def running(name, username=None, password=None, os_type=None, - arch=None): + arch=None, + boot=None): ''' Starts an existing guest, or defines and starts a new VM with specified arguments. @@ -349,6 +350,23 @@ def running(name, .. versionadded:: Neon + :param boot: + Specifies kernel for the virtual machine, as well as boot parameters + for the virtual machine. This is an optionl parameter, and all of the + keys are optional within the dictionary. If a remote path is provided + to kernel or initrd, salt will handle the downloading of the specified + remote fild, and will modify the XML accordingly. + + .. code-block:: python + + { + 'kernel': '/root/f8-i386-vmlinuz', + 'initrd': '/root/f8-i386-initrd', + 'cmdline': 'console=ttyS0 ks=http://example.com/f8-i386/os/' + } + + .. versionadded:: neon + .. rubric:: Example States Make sure an already-defined virtual machine called ``domain_name`` is running: @@ -413,7 +431,8 @@ def running(name, live=False, connection=connection, username=username, - password=password) + password=password, + boot=boot) if status['definition']: action_msg = 'updated and started' __salt__['virt.start'](name) @@ -431,7 +450,8 @@ def running(name, graphics=graphics, connection=connection, username=username, - password=password) + password=password, + boot=boot) ret['changes'][name] = status if status.get('errors', None): ret['comment'] = 'Domain {0} updated, but some live update(s) failed'.format(name) @@ -466,7 +486,8 @@ def running(name, priv_key=priv_key, connection=connection, username=username, - password=password) + password=password, + boot=boot) ret['changes'][name] = 'Domain defined and started' ret['comment'] = 'Domain {0} defined and started'.format(name) except libvirt.libvirtError as err: diff --git a/salt/templates/virt/libvirt_domain.jinja b/salt/templates/virt/libvirt_domain.jinja index 0b4c3fc2d6..fdaea168f2 100644 --- a/salt/templates/virt/libvirt_domain.jinja +++ b/salt/templates/virt/libvirt_domain.jinja @@ -5,7 +5,17 @@ <currentMemory unit='KiB'>{{ mem }}</currentMemory> <os> <type arch='{{ arch }}'>{{ os_type }}</type> - {% if kernel %}<kernel>{{ kernel }}</kernel>{% endif %} + {% if boot %} + {% if 'kernel' in boot %} + <kernel>{{ boot.kernel }}</kernel> + {% endif %} + {% if 'initrd' in boot %} + <initrd>{{ boot.initrd }}</initrd> + {% endif %} + {% if 'cmdline' in boot %} + <cmdline>{{ boot.cmdline }}</cmdline> + {% endif %} + {% endif %} {% for dev in boot_dev %} <boot dev='{{ dev }}' /> {% endfor %} diff --git a/salt/utils/virt.py b/salt/utils/virt.py index 9dad849c0e..b36adba81c 100644 --- a/salt/utils/virt.py +++ b/salt/utils/virt.py @@ -6,16 +6,59 @@ from __future__ import absolute_import, print_function, unicode_literals # Import python libs import os +import re import time import logging +import hashlib + +# pylint: disable=E0611 +from salt.ext.six.moves.urllib.parse import urlparse +from salt.ext.six.moves.urllib import request # Import salt libs import salt.utils.files - 
log = logging.getLogger(__name__) +def download_remote(url, dir): + """ + Attempts to download a file specified by 'url' + + :param url: The full remote path of the file which should be downloaded. + :param dir: The path the file should be downloaded to. + """ + + try: + rand = hashlib.md5(os.urandom(32)).hexdigest() + remote_filename = urlparse(url).path.split('/')[-1] + full_directory = \ + os.path.join(dir, "{}-{}".format(rand, remote_filename)) + with salt.utils.files.fopen(full_directory, 'wb') as file,\ + request.urlopen(url) as response: + file.write(response.read()) + + return full_directory + + except Exception as err: + raise err + + +def check_remote(cmdline_path): + """ + Checks whether the path provided uses an ftp, http, or https scheme. + Returns True if one is found. + + :param cmdline_path: The path to the initrd image or the kernel + """ + regex = re.compile('^(ht|f)tps?\\b') + + if regex.match(urlparse(cmdline_path).scheme): + return True + + return False + + class VirtKey(object): ''' Used to manage key signing requests. diff --git a/tests/unit/modules/test_virt.py b/tests/unit/modules/test_virt.py index 6f594a8ff3..4bdb933a2d 100644 --- a/tests/unit/modules/test_virt.py +++ b/tests/unit/modules/test_virt.py @@ -10,6 +10,7 @@ from __future__ import absolute_import, print_function, unicode_literals import os import re import datetime +import shutil # Import Salt Testing libs from tests.support.mixins import LoaderModuleMockMixin @@ -23,6 +24,7 @@ import salt.modules.config as config from salt._compat import ElementTree as ET import salt.config import salt.syspaths +import tempfile from salt.exceptions import CommandExecutionError # Import third party libs @@ -30,7 +32,6 @@ from salt.ext import six # pylint: disable=import-error from salt.ext.six.moves import range # pylint: disable=redefined-builtin - # pylint: disable=invalid-name,protected-access,attribute-defined-outside-init,too-many-public-methods,unused-argument @@ -610,6 +611,7 @@ class VirtTestCase(TestCase, LoaderModuleMockMixin): 'xen', 'xen', 'x86_64', + boot=None ) root = ET.fromstring(xml_data) self.assertEqual(root.attrib['type'], 'xen') @@ -1123,6 +1125,67 @@ class VirtTestCase(TestCase, LoaderModuleMockMixin): self.assertFalse('<interface' in definition) self.assertFalse('<disk' in definition) + # Ensure the init() function allows creating VM without NIC and + disk but with boot parameters.
+ + defineMock.reset_mock() + mock_run.reset_mock() + boot = { + 'kernel': '/root/f8-i386-vmlinuz', + 'initrd': '/root/f8-i386-initrd', + 'cmdline': + 'console=ttyS0 ks=http://example.com/f8-i386/os/' + } + retval = virt.init('test vm boot params', + 2, + 1234, + nic=None, + disk=None, + seed=False, + start=False, + boot=boot) + definition = defineMock.call_args_list[0][0][0] + self.assertEqual('<kernel' in definition, True) + self.assertEqual('<initrd' in definition, True) + self.assertEqual('<cmdline' in definition, True) + self.assertEqual(retval, True) + + # Verify that remote paths are downloaded and the xml has been + # modified + mock_response = MagicMock() + mock_response.read = MagicMock(return_value='filecontent') + cache_dir = tempfile.mkdtemp() + + with patch.dict(virt.__dict__, {'CACHE_DIR': cache_dir}): + with patch('salt.ext.six.moves.urllib.request.urlopen', + MagicMock(return_value=mock_response)): + with patch('salt.utils.files.fopen', + return_value=mock_response): + + defineMock.reset_mock() + mock_run.reset_mock() + boot = { + 'kernel': + 'https://www.example.com/download/vmlinuz', + 'initrd': '', + 'cmdline': + 'console=ttyS0 ' + 'ks=http://example.com/f8-i386/os/' + } + + retval = virt.init('test remote vm boot params', + 2, + 1234, + nic=None, + disk=None, + seed=False, + start=False, + boot=boot) + definition = defineMock.call_args_list[0][0][0] + self.assertEqual(cache_dir in definition, True) + + shutil.rmtree(cache_dir) + # Test case creating disks defineMock.reset_mock() mock_run.reset_mock() @@ -1222,6 +1285,20 @@ class VirtTestCase(TestCase, LoaderModuleMockMixin): self.assertEqual(setxml.find('vcpu').text, '2') self.assertEqual(setvcpus_mock.call_args[0][0], 2) + boot = { + 'kernel': '/root/f8-i386-vmlinuz', + 'initrd': '/root/f8-i386-initrd', + 'cmdline': + 'console=ttyS0 ks=http://example.com/f8-i386/os/' + } + + # Update with boot parameter case + self.assertEqual({ + 'definition': True, + 'disk': {'attached': [], 'detached': []}, + 'interface': {'attached': [], 'detached': []} + }, virt.update('my vm', boot=boot)) + # Update memory case setmem_mock = MagicMock(return_value=0) domain_mock.setMemoryFlags = setmem_mock diff --git a/tests/unit/states/test_virt.py b/tests/unit/states/test_virt.py index 2af5caca1b..109faf5fba 100644 --- a/tests/unit/states/test_virt.py +++ b/tests/unit/states/test_virt.py @@ -249,7 +249,7 @@ class LibvirtTestCase(TestCase, LoaderModuleMockMixin): mem=2048, image='/path/to/img.qcow2'), ret) init_mock.assert_called_with('myvm', cpu=2, mem=2048, image='/path/to/img.qcow2', - os_type=None, arch=None, + os_type=None, arch=None, boot=None, disk=None, disks=None, nic=None, interfaces=None, graphics=None, hypervisor=None, seed=True, install=True, pub_key=None, priv_key=None, @@ -314,6 +314,7 @@ class LibvirtTestCase(TestCase, LoaderModuleMockMixin): graphics=graphics, hypervisor='qemu', seed=False, + boot=None, install=False, pub_key='/path/to/key.pub', priv_key='/path/to/key', @@ -338,6 +339,22 @@ class LibvirtTestCase(TestCase, LoaderModuleMockMixin): 'comment': 'Domain myvm updated, restart to fully apply the changes'}) self.assertDictEqual(virt.running('myvm', update=True, cpu=2), ret) + # Working update case when running with boot params + boot = { + 'kernel': '/root/f8-i386-vmlinuz', + 'initrd': '/root/f8-i386-initrd', + 'cmdline': 'console=ttyS0 ks=http://example.com/f8-i386/os/' + } + + with patch.dict(virt.__salt__, { # pylint: disable=no-member + 'virt.vm_state': MagicMock(return_value={'myvm': 'running'}), + 'virt.update': 
MagicMock(return_value={'definition': True, 'cpu': True}) + }): + ret.update({'changes': {'myvm': {'definition': True, 'cpu': True}}, + 'result': True, + 'comment': 'Domain myvm updated, restart to fully apply the changes'}) + self.assertDictEqual(virt.running('myvm', update=True, boot=boot), ret) + # Working update case when stopped with patch.dict(virt.__salt__, { # pylint: disable=no-member 'virt.vm_state': MagicMock(return_value={'myvm': 'stopped'}), -- 2.23.0 ++++++ virt-handle-whitespaces-in-vm-names.patch ++++++
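A usage sketch for the kernel boot parameters patch above (not part of the patch itself: the URL and cache directory are illustrative; the helpers are the ones the patch adds to salt.utils.virt):

import tempfile

from salt.utils.virt import check_remote, download_remote

url = 'https://www.example.com/download/vmlinuz'  # illustrative remote kernel
if check_remote(url):
    # check_remote() is True for ftp/http/https paths; plain local paths
    # skip the download and go into the domain XML unchanged.
    cache_dir = tempfile.mkdtemp()
    # download_remote() stores the file as '<cache_dir>/<md5-hex>-vmlinuz'
    # and returns that path, which virt.init() then writes into <kernel>.
    local_kernel = download_remote(url, cache_dir)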
From fbad82a38b4460260726cb3b9456cad7986eb4cd Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?C=C3=A9dric=20Bosdonnat?= <cbosdonnat@suse.com> Date: Wed, 13 Mar 2019 09:43:51 +0100 Subject: [PATCH] virt: handle whitespaces in VM names
The disk creation code is now ready to handle whitespace in virtual machine names. --- salt/modules/virt.py | 8 +++--- tests/unit/modules/test_virt.py | 46 ++++++++++++++++----------------- 2 files changed, 27 insertions(+), 27 deletions(-) diff --git a/salt/modules/virt.py b/salt/modules/virt.py index 423016cd90..d160f0905f 100644 --- a/salt/modules/virt.py +++ b/salt/modules/virt.py @@ -760,14 +760,14 @@ def _qemu_image_create(disk, create_overlay=False, saltenv='base'): qcow2 = False if salt.utils.path.which('qemu-img'): - res = __salt__['cmd.run']('qemu-img info {}'.format(sfn)) + res = __salt__['cmd.run']('qemu-img info "{}"'.format(sfn)) imageinfo = salt.utils.yaml.safe_load(res) qcow2 = imageinfo['file format'] == 'qcow2' try: if create_overlay and qcow2: log.info('Cloning qcow2 image %s using copy on write', sfn) __salt__['cmd.run']( - 'qemu-img create -f qcow2 -o backing_file={0} {1}' + 'qemu-img create -f qcow2 -o backing_file="{0}" "{1}"' .format(sfn, img_dest).split()) else: log.debug('Copying %s to %s', sfn, img_dest) @@ -778,7 +778,7 @@ def _qemu_image_create(disk, create_overlay=False, saltenv='base'): if disk_size and qcow2: log.debug('Resize qcow2 image to %sM', disk_size) __salt__['cmd.run']( - 'qemu-img resize {0} {1}M' + 'qemu-img resize "{0}" {1}M' .format(img_dest, disk_size) ) @@ -800,7 +800,7 @@ def _qemu_image_create(disk, create_overlay=False, saltenv='base'): if disk_size: log.debug('Create empty image with size %sM', disk_size) __salt__['cmd.run']( - 'qemu-img create -f {0} {1} {2}M' + 'qemu-img create -f {0} "{1}" {2}M' .format(disk.get('format', 'qcow2'), img_dest, disk_size) ) else: diff --git a/tests/unit/modules/test_virt.py b/tests/unit/modules/test_virt.py index bbe8d813d7..cc62b67918 100644 --- a/tests/unit/modules/test_virt.py +++ b/tests/unit/modules/test_virt.py @@ -1106,7 +1106,7 @@ class VirtTestCase(TestCase, LoaderModuleMockMixin): with patch.dict(virt.__salt__, {'cmd.run': mock_run}): # pylint: disable=no-member # Ensure the init() function allows creating VM without NIC and disk - virt.init('testvm', + virt.init('test vm', 2, 1234, nic=None, @@ -1120,7 +1120,7 @@ class VirtTestCase(TestCase, LoaderModuleMockMixin): # Test case creating disks defineMock.reset_mock() mock_run.reset_mock() - virt.init('testvm', + virt.init('test vm', 2, 1234, nic=None, @@ -1134,10 +1134,10 @@ class VirtTestCase(TestCase, LoaderModuleMockMixin): definition = ET.fromstring(defineMock.call_args_list[0][0][0]) disk_sources = [disk.find('source').get('file') if disk.find('source') is not None else None for disk in definition.findall('./devices/disk')] - expected_disk_path = os.path.join(root_dir, 'testvm_system.qcow2') + expected_disk_path = os.path.join(root_dir, 'test vm_system.qcow2') self.assertEqual(disk_sources, [expected_disk_path, None]) self.assertEqual(mock_run.call_args[0][0], - 'qemu-img create -f qcow2 {0} 10240M'.format(expected_disk_path)) + 'qemu-img create -f qcow2 "{0}" 10240M'.format(expected_disk_path)) self.assertEqual(mock_chmod.call_args[0][0], expected_disk_path) def test_update(self): @@ -1147,7 +1147,7 @@ class VirtTestCase(TestCase, LoaderModuleMockMixin): root_dir = os.path.join(salt.syspaths.ROOT_DIR, 'srv', 'salt-images') xml = ''' <domain type='kvm' id='7'> - <name>myvm</name> + <name>my vm</name> <memory unit='KiB'>1048576</memory> <currentMemory unit='KiB'>1048576</currentMemory> <vcpu placement='auto'>1</vcpu> @@ -1157,7 +1157,7 @@ class VirtTestCase(TestCase, LoaderModuleMockMixin): <devices> <disk type='file' device='disk'> <driver
name='qemu' type='qcow2'/> - <source file='{0}{1}myvm_system.qcow2'/> + <source file='{0}{1}my vm_system.qcow2'/> <backingStore/> <target dev='vda' bus='virtio'/> <alias name='virtio-disk0'/> @@ -1165,7 +1165,7 @@ class VirtTestCase(TestCase, LoaderModuleMockMixin): </disk> <disk type='file' device='disk'> <driver name='qemu' type='qcow2'/> - <source file='{0}{1}myvm_data.qcow2'/> + <source file='{0}{1}my vm_data.qcow2'/> <backingStore/> <target dev='vdb' bus='virtio'/> <alias name='virtio-disk1'/> @@ -1198,7 +1198,7 @@ class VirtTestCase(TestCase, LoaderModuleMockMixin): </devices> </domain> '''.format(root_dir, os.sep) - domain_mock = self.set_mock_vm('myvm', xml) + domain_mock = self.set_mock_vm('my vm', xml) domain_mock.OSType = MagicMock(return_value='hvm') define_mock = MagicMock(return_value=True) self.mock_conn.defineXML = define_mock @@ -1211,7 +1211,7 @@ class VirtTestCase(TestCase, LoaderModuleMockMixin): 'cpu': True, 'disk': {'attached': [], 'detached': []}, 'interface': {'attached': [], 'detached': []} - }, virt.update('myvm', cpu=2)) + }, virt.update('my vm', cpu=2)) setxml = ET.fromstring(define_mock.call_args[0][0]) self.assertEqual(setxml.find('vcpu').text, '2') self.assertEqual(setvcpus_mock.call_args[0][0], 2) @@ -1225,7 +1225,7 @@ class VirtTestCase(TestCase, LoaderModuleMockMixin): 'mem': True, 'disk': {'attached': [], 'detached': []}, 'interface': {'attached': [], 'detached': []} - }, virt.update('myvm', mem=2048)) + }, virt.update('my vm', mem=2048)) setxml = ET.fromstring(define_mock.call_args[0][0]) self.assertEqual(setxml.find('memory').text, '2048') self.assertEqual(setxml.find('memory').get('unit'), 'MiB') @@ -1240,21 +1240,21 @@ class VirtTestCase(TestCase, LoaderModuleMockMixin): mock_run = MagicMock() with patch.dict(os.__dict__, {'chmod': mock_chmod, 'makedirs': MagicMock()}): # pylint: disable=no-member with patch.dict(virt.__salt__, {'cmd.run': mock_run}): # pylint: disable=no-member - ret = virt.update('myvm', disk_profile='default', disks=[ + ret = virt.update('my vm', disk_profile='default', disks=[ {'name': 'cddrive', 'device': 'cdrom', 'source_file': None, 'model': 'ide'}, {'name': 'added', 'size': 2048}]) added_disk_path = os.path.join( - virt.__salt__['config.get']('virt:images'), 'myvm_added.qcow2') # pylint: disable=no-member + virt.__salt__['config.get']('virt:images'), 'my vm_added.qcow2') # pylint: disable=no-member self.assertEqual(mock_run.call_args[0][0], - 'qemu-img create -f qcow2 {0} 2048M'.format(added_disk_path)) + 'qemu-img create -f qcow2 "{0}" 2048M'.format(added_disk_path)) self.assertEqual(mock_chmod.call_args[0][0], added_disk_path) self.assertListEqual( - [None, os.path.join(root_dir, 'myvm_added.qcow2')], + [None, os.path.join(root_dir, 'my vm_added.qcow2')], [ET.fromstring(disk).find('source').get('file') if str(disk).find('<source') > -1 else None for disk in ret['disk']['attached']]) self.assertListEqual( - [os.path.join(root_dir, 'myvm_data.qcow2')], + [os.path.join(root_dir, 'my vm_data.qcow2')], [ET.fromstring(disk).find('source').get('file') for disk in ret['disk']['detached']]) self.assertEqual(devattach_mock.call_count, 2) devdetach_mock.assert_called_once() @@ -1271,7 +1271,7 @@ class VirtTestCase(TestCase, LoaderModuleMockMixin): devattach_mock.reset_mock() devdetach_mock.reset_mock() with patch.dict(salt.modules.config.__opts__, mock_config): # pylint: disable=no-member - ret = virt.update('myvm', nic_profile='myprofile', + ret = virt.update('my vm', nic_profile='myprofile', interfaces=[{'name': 'eth0', 'type': 
'network', 'source': 'default', 'mac': '52:54:00:39:02:b1'}, {'name': 'eth1', 'type': 'network', 'source': 'newnet'}]) @@ -1285,7 +1285,7 @@ class VirtTestCase(TestCase, LoaderModuleMockMixin): # Remove nics case devattach_mock.reset_mock() devdetach_mock.reset_mock() - ret = virt.update('myvm', nic_profile=None, interfaces=[]) + ret = virt.update('my vm', nic_profile=None, interfaces=[]) self.assertEqual([], ret['interface']['attached']) self.assertEqual(2, len(ret['interface']['detached'])) devattach_mock.assert_not_called() @@ -1294,7 +1294,7 @@ class VirtTestCase(TestCase, LoaderModuleMockMixin): # Remove disks case (yeah, it surely is silly) devattach_mock.reset_mock() devdetach_mock.reset_mock() - ret = virt.update('myvm', disk_profile=None, disks=[]) + ret = virt.update('my vm', disk_profile=None, disks=[]) self.assertEqual([], ret['disk']['attached']) self.assertEqual(2, len(ret['disk']['detached'])) devattach_mock.assert_not_called() @@ -1305,7 +1305,7 @@ class VirtTestCase(TestCase, LoaderModuleMockMixin): 'definition': True, 'disk': {'attached': [], 'detached': []}, 'interface': {'attached': [], 'detached': []} - }, virt.update('myvm', graphics={'type': 'vnc'})) + }, virt.update('my vm', graphics={'type': 'vnc'})) setxml = ET.fromstring(define_mock.call_args[0][0]) self.assertEqual('vnc', setxml.find('devices/graphics').get('type')) @@ -1314,7 +1314,7 @@ class VirtTestCase(TestCase, LoaderModuleMockMixin): 'definition': False, 'disk': {'attached': [], 'detached': []}, 'interface': {'attached': [], 'detached': []} - }, virt.update('myvm', cpu=1, mem=1024, + }, virt.update('my vm', cpu=1, mem=1024, disk_profile='default', disks=[{'name': 'data', 'size': 2048}], nic_profile='myprofile', interfaces=[{'name': 'eth0', 'type': 'network', 'source': 'default', @@ -1328,7 +1328,7 @@ class VirtTestCase(TestCase, LoaderModuleMockMixin): self.mock_conn.defineXML.side_effect = self.mock_libvirt.libvirtError("Test error") setmem_mock.reset_mock() with self.assertRaises(self.mock_libvirt.libvirtError): - virt.update('myvm', mem=2048) + virt.update('my vm', mem=2048) # Failed single update failure case self.mock_conn.defineXML = MagicMock(return_value=True) @@ -1338,7 +1338,7 @@ class VirtTestCase(TestCase, LoaderModuleMockMixin): 'errors': ['Failed to live change memory'], 'disk': {'attached': [], 'detached': []}, 'interface': {'attached': [], 'detached': []} - }, virt.update('myvm', mem=2048)) + }, virt.update('my vm', mem=2048)) # Failed multiple updates failure case self.assertEqual({ @@ -1347,7 +1347,7 @@ class VirtTestCase(TestCase, LoaderModuleMockMixin): 'cpu': True, 'disk': {'attached': [], 'detached': []}, 'interface': {'attached': [], 'detached': []} - }, virt.update('myvm', cpu=4, mem=2048)) + }, virt.update('my vm', cpu=4, mem=2048)) def test_mixed_dict_and_list_as_profile_objects(self): ''' -- 2.21.0 ++++++ virt.network_define-allow-adding-ip-configuration.patch ++++++ ++++ 2074 lines (skipped) ++++++ virt.pool_running-fix-pool-start.patch ++++++
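Before the pool start fix below, a quick sketch of what the quoting in the whitespace patch above buys (paths are illustrative):

path = '/srv/salt-images/test vm_system.qcow2'  # VM name contains a space

# Unquoted, the command line splits on the space and qemu-img is handed
# two bogus arguments ('/srv/salt-images/test' and 'vm_system.qcow2'):
unquoted = 'qemu-img create -f qcow2 {0} 10240M'.format(path)

# Quoted, as in the patch, the whole path survives as one argument:
quoted = 'qemu-img create -f qcow2 "{0}" 10240M'.format(path)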
From 946dd98e911e62c7bc3bcdd8adc8a170645c981c Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?C=C3=A9dric=20Bosdonnat?= <cbosdonnat@suse.com> Date: Wed, 6 Jun 2018 09:49:36 +0200 Subject: [PATCH] virt.pool_running: fix pool start
Building a libvirt pool starts it. When defining a new pool, we need to let build start it or we will get libvirt errors. Also backport virt states test to add test for the bug: cherry picked from commits: - 451e7da55bd232546c4d30ec36d432de2d5a14ec - 495db345a570cb14cd9b0ae96e1bb0f3fad6aef0 - cb00a5f9b4c9a2a863da3c1107ca6458a4092c3d - fc75872fb63e254eecc782168ff8b37157d9e514 - 2a5f6ae5d69be71daeab6c9cbe4dd642255ff3c6 - 2463ebe5a82b1a017004e8e0e390535485dc703e - c7c5d6ee88fbc74d0ee0aeab41beb421d8625f05 --- salt/states/virt.py | 7 +- tests/unit/states/test_virt.py | 508 ++++++++++++++++++++++++++++++++- 2 files changed, 508 insertions(+), 7 deletions(-) diff --git a/salt/states/virt.py b/salt/states/virt.py index 90693880df..d411f864cd 100644 --- a/salt/states/virt.py +++ b/salt/states/virt.py @@ -780,7 +780,7 @@ def pool_running(name, source_name=(source or {}).get('name', None), source_format=(source or {}).get('format', None), transient=transient, - start=True, + start=False, connection=connection, username=username, password=password) @@ -795,11 +795,6 @@ def pool_running(name, connection=connection, username=username, password=password) - - __salt__['virt.pool_start'](name, - connection=connection, - username=username, - password=password) ret['changes'][name] = 'Pool defined and started' ret['comment'] = 'Pool {0} defined and started'.format(name) except libvirt.libvirtError as err: diff --git a/tests/unit/states/test_virt.py b/tests/unit/states/test_virt.py index 2e421319ad..8022989937 100644 --- a/tests/unit/states/test_virt.py +++ b/tests/unit/states/test_virt.py @@ -21,6 +21,25 @@ from tests.support.mock import ( # Import Salt Libs import salt.states.virt as virt import salt.utils.files +from salt.exceptions import CommandExecutionError + +# Import 3rd-party libs +from salt.ext import six + + +class LibvirtMock(MagicMock): # pylint: disable=too-many-ancestors + ''' + libvirt library mockup + ''' + class libvirtError(Exception): # pylint: disable=invalid-name + ''' + libvirt error mockup + ''' + def get_error_message(self): + ''' + Fake function return error message + ''' + return six.text_type(self) @skipIf(NO_MOCK, NO_MOCK_REASON) @@ -29,7 +48,12 @@ class LibvirtTestCase(TestCase, LoaderModuleMockMixin): Test cases for salt.states.libvirt ''' def setup_loader_modules(self): - return {virt: {}} + self.mock_libvirt = LibvirtMock() # pylint: disable=attribute-defined-outside-init + self.addCleanup(delattr, self, 'mock_libvirt') + loader_globals = { + 'libvirt': self.mock_libvirt + } + return {virt: loader_globals} @classmethod def setUpClass(cls): @@ -195,3 +219,485 @@ class LibvirtTestCase(TestCase, LoaderModuleMockMixin): locality='Los_Angeles', organization='SaltStack', expiration_days=700), ret) + + def test_running(self): + ''' + running state test cases. 
+ ''' + ret = {'name': 'myvm', + 'changes': {}, + 'result': True, + 'comment': 'myvm is running'} + with patch.dict(virt.__salt__, { # pylint: disable=no-member + 'virt.vm_state': MagicMock(return_value='stopped'), + 'virt.start': MagicMock(return_value=0), + }): + ret.update({'changes': {'myvm': 'Domain started'}, + 'comment': 'Domain myvm started'}) + self.assertDictEqual(virt.running('myvm'), ret) + + init_mock = MagicMock(return_value=True) + with patch.dict(virt.__salt__, { # pylint: disable=no-member + 'virt.vm_state': MagicMock(side_effect=CommandExecutionError('not found')), + 'virt.init': init_mock, + 'virt.start': MagicMock(return_value=0) + }): + ret.update({'changes': {'myvm': 'Domain defined and started'}, + 'comment': 'Domain myvm defined and started'}) + self.assertDictEqual(virt.running('myvm', + cpu=2, + mem=2048, + image='/path/to/img.qcow2'), ret) + init_mock.assert_called_with('myvm', cpu=2, mem=2048, image='/path/to/img.qcow2', + os_type=None, arch=None, + disk=None, disks=None, nic=None, interfaces=None, + graphics=None, hypervisor=None, + seed=True, install=True, pub_key=None, priv_key=None, + connection=None, username=None, password=None) + + with patch.dict(virt.__salt__, { # pylint: disable=no-member + 'virt.vm_state': MagicMock(side_effect=CommandExecutionError('not found')), + 'virt.init': init_mock, + 'virt.start': MagicMock(return_value=0) + }): + ret.update({'changes': {'myvm': 'Domain defined and started'}, + 'comment': 'Domain myvm defined and started'}) + disks = [{ + 'name': 'system', + 'size': 8192, + 'overlay_image': True, + 'pool': 'default', + 'image': '/path/to/image.qcow2' + }, + { + 'name': 'data', + 'size': 16834 + }] + ifaces = [{ + 'name': 'eth0', + 'mac': '01:23:45:67:89:AB' + }, + { + 'name': 'eth1', + 'type': 'network', + 'source': 'admin' + }] + graphics = {'type': 'spice', 'listen': {'type': 'address', 'address': '192.168.0.1'}} + self.assertDictEqual(virt.running('myvm', + cpu=2, + mem=2048, + os_type='linux', + arch='i686', + vm_type='qemu', + disk_profile='prod', + disks=disks, + nic_profile='prod', + interfaces=ifaces, + graphics=graphics, + seed=False, + install=False, + pub_key='/path/to/key.pub', + priv_key='/path/to/key', + connection='someconnection', + username='libvirtuser', + password='supersecret'), ret) + init_mock.assert_called_with('myvm', + cpu=2, + mem=2048, + os_type='linux', + arch='i686', + image=None, + disk='prod', + disks=disks, + nic='prod', + interfaces=ifaces, + graphics=graphics, + hypervisor='qemu', + seed=False, + install=False, + pub_key='/path/to/key.pub', + priv_key='/path/to/key', + connection='someconnection', + username='libvirtuser', + password='supersecret') + + with patch.dict(virt.__salt__, { # pylint: disable=no-member + 'virt.vm_state': MagicMock(return_value='stopped'), + 'virt.start': MagicMock(side_effect=[self.mock_libvirt.libvirtError('libvirt error msg')]) + }): + ret.update({'changes': {}, 'result': False, 'comment': 'libvirt error msg'}) + self.assertDictEqual(virt.running('myvm'), ret) + + # Working update case when running + with patch.dict(virt.__salt__, { # pylint: disable=no-member + 'virt.vm_state': MagicMock(return_value='running'), + 'virt.update': MagicMock(return_value={'definition': True, 'cpu': True}) + }): + ret.update({'changes': {'myvm': {'definition': True, 'cpu': True}}, + 'result': True, + 'comment': 'Domain myvm updated, restart to fully apply the changes'}) + self.assertDictEqual(virt.running('myvm', update=True, cpu=2), ret) + + # Working update case when stopped + with 
patch.dict(virt.__salt__, { # pylint: disable=no-member + 'virt.vm_state': MagicMock(return_value='stopped'), + 'virt.start': MagicMock(return_value=0), + 'virt.update': MagicMock(return_value={'definition': True}) + }): + ret.update({'changes': {'myvm': 'Domain updated and started'}, + 'result': True, + 'comment': 'Domain myvm updated and started'}) + self.assertDictEqual(virt.running('myvm', update=True, cpu=2), ret) + + # Failed live update case + with patch.dict(virt.__salt__, { # pylint: disable=no-member + 'virt.vm_state': MagicMock(return_value='running'), + 'virt.update': MagicMock(return_value={'definition': True, 'cpu': False, 'errors': ['some error']}) + }): + ret.update({'changes': {'myvm': {'definition': True, 'cpu': False, 'errors': ['some error']}}, + 'result': True, + 'comment': 'Domain myvm updated, but some live update(s) failed'}) + self.assertDictEqual(virt.running('myvm', update=True, cpu=2), ret) + + # Failed definition update case + with patch.dict(virt.__salt__, { # pylint: disable=no-member + 'virt.vm_state': MagicMock(return_value='running'), + 'virt.update': MagicMock(side_effect=[self.mock_libvirt.libvirtError('error message')]) + }): + ret.update({'changes': {}, + 'result': False, + 'comment': 'error message'}) + self.assertDictEqual(virt.running('myvm', update=True, cpu=2), ret) + + def test_stopped(self): + ''' + stopped state test cases. + ''' + ret = {'name': 'myvm', + 'changes': {}, + 'result': True} + + shutdown_mock = MagicMock(return_value=True) + with patch.dict(virt.__salt__, { # pylint: disable=no-member + 'virt.list_domains': MagicMock(return_value=['myvm', 'vm1']), + 'virt.shutdown': shutdown_mock + }): + ret.update({'changes': { + 'stopped': [{'domain': 'myvm', 'shutdown': True}] + }, + 'comment': 'Machine has been shut down'}) + self.assertDictEqual(virt.stopped('myvm'), ret) + shutdown_mock.assert_called_with('myvm', connection=None, username=None, password=None) + + with patch.dict(virt.__salt__, { # pylint: disable=no-member + 'virt.list_domains': MagicMock(return_value=['myvm', 'vm1']), + 'virt.shutdown': shutdown_mock, + }): + self.assertDictEqual(virt.stopped('myvm', + connection='myconnection', + username='user', + password='secret'), ret) + shutdown_mock.assert_called_with('myvm', connection='myconnection', username='user', password='secret') + + with patch.dict(virt.__salt__, { # pylint: disable=no-member + 'virt.list_domains': MagicMock(return_value=['myvm', 'vm1']), + 'virt.shutdown': MagicMock(side_effect=self.mock_libvirt.libvirtError('Some error')) + }): + ret.update({'changes': {'ignored': [{'domain': 'myvm', 'issue': 'Some error'}]}, + 'result': False, + 'comment': 'No changes had happened'}) + self.assertDictEqual(virt.stopped('myvm'), ret) + + with patch.dict(virt.__salt__, {'virt.list_domains': MagicMock(return_value=[])}): # pylint: disable=no-member + ret.update({'changes': {}, 'result': False, 'comment': 'No changes had happened'}) + self.assertDictEqual(virt.stopped('myvm'), ret) + + def test_powered_off(self): + ''' + powered_off state test cases. 
+ ''' + ret = {'name': 'myvm', + 'changes': {}, + 'result': True} + + stop_mock = MagicMock(return_value=True) + with patch.dict(virt.__salt__, { # pylint: disable=no-member + 'virt.list_domains': MagicMock(return_value=['myvm', 'vm1']), + 'virt.stop': stop_mock + }): + ret.update({'changes': { + 'unpowered': [{'domain': 'myvm', 'stop': True}] + }, + 'comment': 'Machine has been powered off'}) + self.assertDictEqual(virt.powered_off('myvm'), ret) + stop_mock.assert_called_with('myvm', connection=None, username=None, password=None) + + with patch.dict(virt.__salt__, { # pylint: disable=no-member + 'virt.list_domains': MagicMock(return_value=['myvm', 'vm1']), + 'virt.stop': stop_mock, + }): + self.assertDictEqual(virt.powered_off('myvm', + connection='myconnection', + username='user', + password='secret'), ret) + stop_mock.assert_called_with('myvm', connection='myconnection', username='user', password='secret') + + with patch.dict(virt.__salt__, { # pylint: disable=no-member + 'virt.list_domains': MagicMock(return_value=['myvm', 'vm1']), + 'virt.stop': MagicMock(side_effect=self.mock_libvirt.libvirtError('Some error')) + }): + ret.update({'changes': {'ignored': [{'domain': 'myvm', 'issue': 'Some error'}]}, + 'result': False, + 'comment': 'No changes had happened'}) + self.assertDictEqual(virt.powered_off('myvm'), ret) + + with patch.dict(virt.__salt__, {'virt.list_domains': MagicMock(return_value=[])}): # pylint: disable=no-member + ret.update({'changes': {}, 'result': False, 'comment': 'No changes had happened'}) + self.assertDictEqual(virt.powered_off('myvm'), ret) + + def test_snapshot(self): + ''' + snapshot state test cases. + ''' + ret = {'name': 'myvm', + 'changes': {}, + 'result': True} + + snapshot_mock = MagicMock(return_value=True) + with patch.dict(virt.__salt__, { # pylint: disable=no-member + 'virt.list_domains': MagicMock(return_value=['myvm', 'vm1']), + 'virt.snapshot': snapshot_mock + }): + ret.update({'changes': { + 'saved': [{'domain': 'myvm', 'snapshot': True}] + }, + 'comment': 'Snapshot has been taken'}) + self.assertDictEqual(virt.snapshot('myvm'), ret) + snapshot_mock.assert_called_with('myvm', suffix=None, connection=None, username=None, password=None) + + with patch.dict(virt.__salt__, { # pylint: disable=no-member + 'virt.list_domains': MagicMock(return_value=['myvm', 'vm1']), + 'virt.snapshot': snapshot_mock, + }): + self.assertDictEqual(virt.snapshot('myvm', + suffix='snap', + connection='myconnection', + username='user', + password='secret'), ret) + snapshot_mock.assert_called_with('myvm', + suffix='snap', + connection='myconnection', + username='user', + password='secret') + + with patch.dict(virt.__salt__, { # pylint: disable=no-member + 'virt.list_domains': MagicMock(return_value=['myvm', 'vm1']), + 'virt.snapshot': MagicMock(side_effect=self.mock_libvirt.libvirtError('Some error')) + }): + ret.update({'changes': {'ignored': [{'domain': 'myvm', 'issue': 'Some error'}]}, + 'result': False, + 'comment': 'No changes had happened'}) + self.assertDictEqual(virt.snapshot('myvm'), ret) + + with patch.dict(virt.__salt__, {'virt.list_domains': MagicMock(return_value=[])}): # pylint: disable=no-member + ret.update({'changes': {}, 'result': False, 'comment': 'No changes had happened'}) + self.assertDictEqual(virt.snapshot('myvm'), ret) + + def test_rebooted(self): + ''' + rebooted state test cases. 
+ ''' + ret = {'name': 'myvm', + 'changes': {}, + 'result': True} + + reboot_mock = MagicMock(return_value=True) + with patch.dict(virt.__salt__, { # pylint: disable=no-member + 'virt.list_domains': MagicMock(return_value=['myvm', 'vm1']), + 'virt.reboot': reboot_mock + }): + ret.update({'changes': { + 'rebooted': [{'domain': 'myvm', 'reboot': True}] + }, + 'comment': 'Machine has been rebooted'}) + self.assertDictEqual(virt.rebooted('myvm'), ret) + reboot_mock.assert_called_with('myvm', connection=None, username=None, password=None) + + with patch.dict(virt.__salt__, { # pylint: disable=no-member + 'virt.list_domains': MagicMock(return_value=['myvm', 'vm1']), + 'virt.reboot': reboot_mock, + }): + self.assertDictEqual(virt.rebooted('myvm', + connection='myconnection', + username='user', + password='secret'), ret) + reboot_mock.assert_called_with('myvm', + connection='myconnection', + username='user', + password='secret') + + with patch.dict(virt.__salt__, { # pylint: disable=no-member + 'virt.list_domains': MagicMock(return_value=['myvm', 'vm1']), + 'virt.reboot': MagicMock(side_effect=self.mock_libvirt.libvirtError('Some error')) + }): + ret.update({'changes': {'ignored': [{'domain': 'myvm', 'issue': 'Some error'}]}, + 'result': False, + 'comment': 'No changes had happened'}) + self.assertDictEqual(virt.rebooted('myvm'), ret) + + with patch.dict(virt.__salt__, {'virt.list_domains': MagicMock(return_value=[])}): # pylint: disable=no-member + ret.update({'changes': {}, 'result': False, 'comment': 'No changes had happened'}) + self.assertDictEqual(virt.rebooted('myvm'), ret) + + def test_network_running(self): + ''' + network_running state test cases. + ''' + ret = {'name': 'mynet', 'changes': {}, 'result': True, 'comment': ''} + define_mock = MagicMock(return_value=True) + with patch.dict(virt.__salt__, { # pylint: disable=no-member + 'virt.network_info': MagicMock(return_value={}), + 'virt.network_define': define_mock + }): + ret.update({'changes': {'mynet': 'Network defined and started'}, + 'comment': 'Network mynet defined and started'}) + self.assertDictEqual(virt.network_running('mynet', + 'br2', + 'bridge', + vport='openvswitch', + tag=180, + autostart=False, + connection='myconnection', + username='user', + password='secret'), ret) + define_mock.assert_called_with('mynet', + 'br2', + 'bridge', + 'openvswitch', + tag=180, + autostart=False, + start=True, + connection='myconnection', + username='user', + password='secret') + + with patch.dict(virt.__salt__, { # pylint: disable=no-member + 'virt.network_info': MagicMock(return_value={'active': True}), + 'virt.network_define': define_mock, + }): + ret.update({'changes': {}, 'comment': 'Network mynet exists and is running'}) + self.assertDictEqual(virt.network_running('mynet', 'br2', 'bridge'), ret) + + start_mock = MagicMock(return_value=True) + with patch.dict(virt.__salt__, { # pylint: disable=no-member + 'virt.network_info': MagicMock(return_value={'active': False}), + 'virt.network_start': start_mock, + 'virt.network_define': define_mock, + }): + ret.update({'changes': {'mynet': 'Network started'}, 'comment': 'Network mynet started'}) + self.assertDictEqual(virt.network_running('mynet', + 'br2', + 'bridge', + connection='myconnection', + username='user', + password='secret'), ret) + start_mock.assert_called_with('mynet', connection='myconnection', username='user', password='secret') + + with patch.dict(virt.__salt__, { # pylint: disable=no-member + 'virt.network_info': MagicMock(return_value={}), + 'virt.network_define': 
MagicMock(side_effect=self.mock_libvirt.libvirtError('Some error')) + }): + ret.update({'changes': {}, 'comment': 'Some error', 'result': False}) + self.assertDictEqual(virt.network_running('mynet', 'br2', 'bridge'), ret) + + def test_pool_running(self): + ''' + pool_running state test cases. + ''' + ret = {'name': 'mypool', 'changes': {}, 'result': True, 'comment': ''} + mocks = {mock: MagicMock(return_value=True) for mock in ['define', 'autostart', 'build', 'start']} + with patch.dict(virt.__salt__, { # pylint: disable=no-member + 'virt.pool_info': MagicMock(return_value={}), + 'virt.pool_define': mocks['define'], + 'virt.pool_build': mocks['build'], + 'virt.pool_start': mocks['start'], + 'virt.pool_set_autostart': mocks['autostart'] + }): + ret.update({'changes': {'mypool': 'Pool defined and started'}, + 'comment': 'Pool mypool defined and started'}) + self.assertDictEqual(virt.pool_running('mypool', + ptype='logical', + target='/dev/base', + permissions={'mode': '0770', + 'owner': 1000, + 'group': 100, + 'label': 'seclabel'}, + source={'devices': [{'path': '/dev/sda'}]}, + transient=True, + autostart=True, + connection='myconnection', + username='user', + password='secret'), ret) + mocks['define'].assert_called_with('mypool', + ptype='logical', + target='/dev/base', + permissions={'mode': '0770', + 'owner': 1000, + 'group': 100, + 'label': 'seclabel'}, + source_devices=[{'path': '/dev/sda'}], + source_dir=None, + source_adapter=None, + source_hosts=None, + source_auth=None, + source_name=None, + source_format=None, + transient=True, + start=False, + connection='myconnection', + username='user', + password='secret') + mocks['autostart'].assert_called_with('mypool', + state='on', + connection='myconnection', + username='user', + password='secret') + mocks['build'].assert_called_with('mypool', + connection='myconnection', + username='user', + password='secret') + mocks['start'].assert_not_called() + + with patch.dict(virt.__salt__, { # pylint: disable=no-member + 'virt.pool_info': MagicMock(return_value={'state': 'running'}), + }): + ret.update({'changes': {}, 'comment': 'Pool mypool exists and is running'}) + self.assertDictEqual(virt.pool_running('mypool', + ptype='logical', + target='/dev/base', + source={'devices': [{'path': '/dev/sda'}]}), ret) + + for mock in mocks: + mocks[mock].reset_mock() + with patch.dict(virt.__salt__, { # pylint: disable=no-member + 'virt.pool_info': MagicMock(return_value={'state': 'stopped'}), + 'virt.pool_build': mocks['build'], + 'virt.pool_start': mocks['start'] + }): + ret.update({'changes': {'mypool': 'Pool started'}, 'comment': 'Pool mypool started'}) + self.assertDictEqual(virt.pool_running('mypool', + ptype='logical', + target='/dev/base', + source={'devices': [{'path': '/dev/sda'}]}), ret) + mocks['start'].assert_called_with('mypool', connection=None, username=None, password=None) + mocks['build'].assert_not_called() + + with patch.dict(virt.__salt__, { # pylint: disable=no-member + 'virt.pool_info': MagicMock(return_value={}), + 'virt.pool_define': MagicMock(side_effect=self.mock_libvirt.libvirtError('Some error')) + }): + ret.update({'changes': {}, 'comment': 'Some error', 'result': False}) + self.assertDictEqual(virt.pool_running('mypool', + ptype='logical', + target='/dev/base', + source={'devices': [{'path': '/dev/sda'}]}), ret) -- 2.21.0 ++++++ virt.volume_infos-fix-for-single-vm.patch ++++++
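The pool start fix above reduces to the following call sequence for a brand-new pool (a sketch reconstructed from the diff and its tests; argument lists are abbreviated):

# Inside virt.pool_running, once no existing pool was found (sketch):
__salt__['virt.pool_define'](name,
                             ptype=ptype,
                             target=target,
                             # ... source_* arguments elided ...
                             transient=transient,
                             start=False,   # was start=True before this patch
                             connection=connection,
                             username=username,
                             password=password)
if autostart:
    __salt__['virt.pool_set_autostart'](name, state='on',
                                        connection=connection,
                                        username=username,
                                        password=password)
# Building a freshly defined libvirt pool also starts it, which is why the
# explicit virt.pool_start call was dropped from the state.
__salt__['virt.pool_build'](name,
                            connection=connection,
                            username=username,
                            password=password)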
From b0b5a78a463f7587be4f81074b182d1f4b4461be Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?C=C3=A9dric=20Bosdonnat?= <cbosdonnat@suse.com> Date: Thu, 4 Apr 2019 16:18:58 +0200 Subject: [PATCH] virt.volume_infos fix for single VM
_get_domain returns a bare domain object when only one VM has been found. virt.volume_infos needs to take care of it, or it will fail to list volume information when the host has just one VM. --- salt/modules/virt.py | 4 ++- tests/unit/modules/test_virt.py | 46 +++++++++++++++++++++++++++++++++ 2 files changed, 49 insertions(+), 1 deletion(-) diff --git a/salt/modules/virt.py b/salt/modules/virt.py index 17039444c4..a3f625909d 100644 --- a/salt/modules/virt.py +++ b/salt/modules/virt.py @@ -5047,10 +5047,12 @@ def volume_infos(pool=None, volume=None, **kwargs): conn = __get_conn(**kwargs) try: backing_stores = _get_all_volumes_paths(conn) + domains = _get_domain(conn) + domains_list = domains if isinstance(domains, list) else [domains] disks = {domain.name(): {node.get('file') for node in ElementTree.fromstring(domain.XMLDesc(0)).findall('.//disk/source/[@file]')} - for domain in _get_domain(conn)} + for domain in domains_list} def _volume_extract_infos(vol): ''' diff --git a/tests/unit/modules/test_virt.py b/tests/unit/modules/test_virt.py index 14e51e1e2a..bbe8d813d7 100644 --- a/tests/unit/modules/test_virt.py +++ b/tests/unit/modules/test_virt.py @@ -2864,6 +2864,52 @@ class VirtTestCase(TestCase, LoaderModuleMockMixin): } }) + # Single VM test + with patch('salt.modules.virt._get_domain', MagicMock(return_value=mock_vms[0])): + actual = virt.volume_infos('pool0', 'vol0') + self.assertEqual(1, len(actual.keys())) + self.assertEqual(1, len(actual['pool0'].keys())) + self.assertEqual(['vm0'], sorted(actual['pool0']['vol0']['used_by'])) + self.assertEqual('/path/to/vol0.qcow2', actual['pool0']['vol0']['path']) + self.assertEqual('file', actual['pool0']['vol0']['type']) + self.assertEqual('/key/of/vol0', actual['pool0']['vol0']['key']) + self.assertEqual(123456789, actual['pool0']['vol0']['capacity']) + self.assertEqual(123456, actual['pool0']['vol0']['allocation']) + + self.assertEqual(virt.volume_infos('pool1', None), { + 'pool1': { + 'vol1': { + 'type': 'file', + 'key': '/key/of/vol1', + 'path': '/path/to/vol1.qcow2', + 'capacity': 12345, + 'allocation': 1234, + 'used_by': [], + }, + 'vol2': { + 'type': 'file', + 'key': '/key/of/vol2', + 'path': '/path/to/vol2.qcow2', + 'capacity': 12345, + 'allocation': 1234, + 'used_by': [], + } + } + }) + + self.assertEqual(virt.volume_infos(None, 'vol2'), { + 'pool1': { + 'vol2': { + 'type': 'file', + 'key': '/key/of/vol2', + 'path': '/path/to/vol2.qcow2', + 'capacity': 12345, + 'allocation': 1234, + 'used_by': [], + } + } + }) + def test_volume_delete(self): ''' Test virt.volume_delete -- 2.21.0 ++++++ virt.volume_infos-needs-to-ignore-inactive-pools-174.patch ++++++
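The single-VM fix above is an instance of a small normalization idiom; a standalone sketch:

def ensure_list(value):
    # _get_domain()-style APIs hand back a bare object for a single match
    # and a list otherwise; normalizing first keeps the iteration uniform.
    return value if isinstance(value, list) else [value]

assert ensure_list('vm0') == ['vm0']
assert ensure_list(['vm0', 'vm1']) == ['vm0', 'vm1']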
From df1caa8fa6551f880202649a7f4133343da5da0f Mon Sep 17 00:00:00 2001 From: Cedric Bosdonnat <cbosdonnat@suse.com> Date: Tue, 3 Sep 2019 15:17:38 +0200 Subject: [PATCH] virt.volume_infos needs to ignore inactive pools (#174)
libvirt raises an error when getting the list of volumes of a pool that is not active. Rule out those pools from virt.volume_infos, since we still need to report information on the other pools' volumes. --- salt/modules/virt.py | 7 +++++-- tests/unit/modules/test_virt.py | 9 +++++++++ 2 files changed, 14 insertions(+), 2 deletions(-) diff --git a/salt/modules/virt.py b/salt/modules/virt.py index 953064cc2c..0353e6a1f5 100644 --- a/salt/modules/virt.py +++ b/salt/modules/virt.py @@ -5021,7 +5021,9 @@ def _get_all_volumes_paths(conn): :param conn: libvirt connection to use ''' - volumes = [vol for l in [obj.listAllVolumes() for obj in conn.listAllStoragePools()] for vol in l] + volumes = [vol for l in + [obj.listAllVolumes() for obj in conn.listAllStoragePools() + if obj.info()[0] == libvirt.VIR_STORAGE_POOL_RUNNING] for vol in l] return {vol.path(): [path.text for path in ElementTree.fromstring(vol.XMLDesc()).findall('.//backingStore/path')] for vol in volumes if _is_valid_volume(vol)} @@ -5086,7 +5088,8 @@ def volume_infos(pool=None, volume=None, **kwargs): 'used_by': used_by, } - pools = [obj for obj in conn.listAllStoragePools() if pool is None or obj.name() == pool] + pools = [obj for obj in conn.listAllStoragePools() + if (pool is None or obj.name() == pool) and obj.info()[0] == libvirt.VIR_STORAGE_POOL_RUNNING] vols = {pool_obj.name(): {vol.name(): _volume_extract_infos(vol) for vol in pool_obj.listAllVolumes() if (volume is None or vol.name() == volume) and _is_valid_volume(vol)} diff --git a/tests/unit/modules/test_virt.py b/tests/unit/modules/test_virt.py index b343b9bc31..e644e62452 100644 --- a/tests/unit/modules/test_virt.py +++ b/tests/unit/modules/test_virt.py @@ -2743,6 +2743,7 @@ class VirtTestCase(TestCase, LoaderModuleMockMixin): mock_pool_data = [ { 'name': 'pool0', + 'state': self.mock_libvirt.VIR_STORAGE_POOL_RUNNING, 'volumes': [ { 'key': '/key/of/vol0', @@ -2755,6 +2756,7 @@ class VirtTestCase(TestCase, LoaderModuleMockMixin): }, { 'name': 'pool1', + 'state': self.mock_libvirt.VIR_STORAGE_POOL_RUNNING, 'volumes': [ { 'key': '/key/of/vol0bad', @@ -2784,6 +2786,7 @@ class VirtTestCase(TestCase, LoaderModuleMockMixin): for pool_data in mock_pool_data: mock_pool = MagicMock() mock_pool.name.return_value = pool_data['name'] # pylint: disable=no-member + mock_pool.info.return_value = [pool_data['state']] mock_volumes = [] for vol_data in pool_data['volumes']: mock_volume = MagicMock() @@ -2817,6 +2820,12 @@ class VirtTestCase(TestCase, LoaderModuleMockMixin): mock_pool.listAllVolumes.return_value = mock_volumes # pylint: disable=no-member mock_pools.append(mock_pool) + inactive_pool = MagicMock() + inactive_pool.name.return_value = 'pool2' + inactive_pool.info.return_value = [self.mock_libvirt.VIR_STORAGE_POOL_INACTIVE] + inactive_pool.listAllVolumes.side_effect = self.mock_libvirt.libvirtError('pool is inactive') + mock_pools.append(inactive_pool) + self.mock_conn.listAllStoragePools.return_value = mock_pools # pylint: disable=no-member with patch('salt.modules.virt._get_domain', MagicMock(return_value=mock_vms)): -- 2.20.1 ++++++ virt.volume_infos-silence-libvirt-error-message-175.patch ++++++
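A minimal libvirt-python sketch of the filtering the patch above applies (assumes a reachable qemu:///system connection):

import libvirt

conn = libvirt.open('qemu:///system')
for pool in conn.listAllStoragePools():
    # pool.info()[0] is the pool state; asking an inactive pool for its
    # volumes raises libvirtError, so non-running pools are skipped.
    if pool.info()[0] != libvirt.VIR_STORAGE_POOL_RUNNING:
        continue
    for vol in pool.listAllVolumes():
        print(pool.name(), vol.path())
conn.close()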
From fa621d17371ea6c8eff75460755c0040fcbf13de Mon Sep 17 00:00:00 2001 From: Cedric Bosdonnat <cbosdonnat@suse.com> Date: Tue, 3 Sep 2019 15:17:46 +0200 Subject: [PATCH] virt.volume_infos: silence libvirt error message (#175)
Even though volume_infos handles the libvirt exception when a volume is missing, libvirt was still outputting the error message in the log. Since this can add noise to the log, only record the libvirt error message at debug level. --- salt/modules/virt.py | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/salt/modules/virt.py b/salt/modules/virt.py index 0353e6a1f5..96c17bd60b 100644 --- a/salt/modules/virt.py +++ b/salt/modules/virt.py @@ -5008,8 +5008,14 @@ def _is_valid_volume(vol): the last pool refresh. ''' try: - # Getting info on an invalid volume raises error + # Getting info on an invalid volume raises error and libvirt logs an error + def discarder(ctxt, error): # pylint: disable=unused-argument + log.debug("Ignore libvirt error: %s", error[2]) + # Disable the libvirt error logging + libvirt.registerErrorHandler(discarder, None) vol.info() + # Reenable the libvirt error logging + libvirt.registerErrorHandler(None, None) return True except libvirt.libvirtError as err: return False -- 2.20.1 ++++++ x509-fixes-111.patch ++++++
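The registerErrorHandler dance above, as a standalone sketch ('vol' stands in for a virStorageVol obtained elsewhere; err[2] is the message text, matching the handler signature used in the patch):

import logging

import libvirt

log = logging.getLogger(__name__)

def _quiet(ctxt, err):  # pylint: disable=unused-argument
    # err is the libvirt error tuple; index 2 carries the message text
    log.debug('Ignore libvirt error: %s', err[2])

libvirt.registerErrorHandler(_quiet, None)   # stop libvirt writing to stderr
try:
    vol.info()  # raises libvirtError for volumes gone since the last refresh
except libvirt.libvirtError:
    pass
finally:
    libvirt.registerErrorHandler(None, None)  # restore the default reporting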
From c5adc0c126e593d12c9b18bcf60f96336c75e4a8 Mon Sep 17 00:00:00 2001 From: Florian Bergmann <bergmannf@users.noreply.github.com> Date: Fri, 14 Sep 2018 10:30:39 +0200 Subject: [PATCH] X509 fixes (#111)
* Return proper content type for the x509 certificate * Remove parentheses * Remove extra-variables during the import * Comment fix * Remove double returns * Change log level from trace to debug * Remove 'pass' and add logging instead * Remove unnecessary wrapping * PEP8: line too long * PEP8: Redefine RSAError variable in except clause * Do not return None if name was not found * Do not return None if no matched minions found * Fix unit tests --- salt/modules/publish.py | 8 +-- salt/modules/x509.py | 132 ++++++++++++++++------------------ salt/states/x509.py | 22 ++++--- 3 files changed, 69 insertions(+), 93 deletions(-) diff --git a/salt/modules/publish.py b/salt/modules/publish.py index 62e3e98f2f..fda848d1ec 100644 --- a/salt/modules/publish.py +++ b/salt/modules/publish.py @@ -82,10 +82,8 @@ def _publish( in minion configuration but `via_master` was specified.') else: # Find the master in the list of master_uris generated by the minion base class - matching_master_uris = [master for master - in __opts__['master_uri_list'] - if '//{0}:'.format(via_master) - in master] + matching_master_uris = [master for master in __opts__['master_uri_list'] + if '//{0}:'.format(via_master) in master] if not matching_master_uris: raise SaltInvocationError('Could not find match for {0} in \ @@ -178,6 +176,8 @@ def _publish( finally: channel.close() + return {} + def publish(tgt, fun, diff --git a/salt/modules/x509.py b/salt/modules/x509.py index 8689bfad35..4126f34960 100644 --- a/salt/modules/x509.py +++ b/salt/modules/x509.py @@ -38,14 +38,13 @@ from salt.state import STATE_INTERNAL_KEYWORDS as _STATE_INTERNAL_KEYWORDS # Import 3rd Party Libs try: import M2Crypto - HAS_M2 = True except ImportError: - HAS_M2 = False + M2Crypto = None + try: import OpenSSL - HAS_OPENSSL = True except ImportError: - HAS_OPENSSL = False + OpenSSL = None __virtualname__ = 'x509' @@ -83,10 +82,7 @@ def __virtual__(): ''' only load this module if m2crypto is available ''' - if HAS_M2: - return __virtualname__ - else: - return (False, 'Could not load x509 module, m2crypto unavailable') + return __virtualname__ if M2Crypto is not None else (False, 'Could not load x509 module, m2crypto unavailable') class _Ctx(ctypes.Structure): @@ -129,10 +125,8 @@ def _new_extension(name, value, critical=0, issuer=None, _pyfree=1): doesn't support getting the publickeyidentifier from the issuer to create the authoritykeyidentifier extension. 
''' - if name == 'subjectKeyIdentifier' and \ - value.strip('0123456789abcdefABCDEF:') is not '': - raise salt.exceptions.SaltInvocationError( - 'value must be precomputed hash') + if name == 'subjectKeyIdentifier' and value.strip('0123456789abcdefABCDEF:') is not '': + raise salt.exceptions.SaltInvocationError('value must be precomputed hash') # ensure name and value are bytes name = salt.utils.stringutils.to_str(name) @@ -147,7 +141,7 @@ def _new_extension(name, value, critical=0, issuer=None, _pyfree=1): x509_ext_ptr = M2Crypto.m2.x509v3_ext_conf(None, ctx, name, value) lhash = None except AttributeError: - lhash = M2Crypto.m2.x509v3_lhash() # pylint: disable=no-member + lhash = M2Crypto.m2.x509v3_lhash() # pylint: disable=no-member ctx = M2Crypto.m2.x509v3_set_conf_lhash( lhash) # pylint: disable=no-member # ctx not zeroed @@ -198,10 +192,8 @@ def _get_csr_extensions(csr): csrtempfile.flush() csryaml = _parse_openssl_req(csrtempfile.name) csrtempfile.close() - if csryaml and 'Requested Extensions' in \ - csryaml['Certificate Request']['Data']: - csrexts = \ - csryaml['Certificate Request']['Data']['Requested Extensions'] + if csryaml and 'Requested Extensions' in csryaml['Certificate Request']['Data']: + csrexts = csryaml['Certificate Request']['Data']['Requested Extensions'] if not csrexts: return ret @@ -296,7 +288,7 @@ def _get_signing_policy(name): signing_policy = policies.get(name) if signing_policy: return signing_policy - return __salt__['config.get']('x509_signing_policies', {}).get(name) + return __salt__['config.get']('x509_signing_policies', {}).get(name) or {} def _pretty_hex(hex_str): @@ -335,9 +327,11 @@ def _text_or_file(input_): ''' if _isfile(input_): with salt.utils.files.fopen(input_) as fp_: - return salt.utils.stringutils.to_str(fp_.read()) + out = salt.utils.stringutils.to_str(fp_.read()) else: - return salt.utils.stringutils.to_str(input_) + out = salt.utils.stringutils.to_str(input_) + + return out def _parse_subject(subject): @@ -355,7 +349,7 @@ def _parse_subject(subject): ret[nid_name] = val nids.append(nid_num) except TypeError as err: - log.trace("Missing attribute '%s'. Error: %s", nid_name, err) + log.debug("Missing attribute '%s'. Error: %s", nid_name, err) return ret @@ -533,8 +527,8 @@ def get_pem_entries(glob_path): if os.path.isfile(path): try: ret[path] = get_pem_entry(text=path) - except ValueError: - pass + except ValueError as err: + log.debug('Unable to get PEM entries from %s: %s', path, err) return ret @@ -612,8 +606,8 @@ def read_certificates(glob_path): if os.path.isfile(path): try: ret[path] = read_certificate(certificate=path) - except ValueError: - pass + except ValueError as err: + log.debug('Unable to read certificate %s: %s', path, err) return ret @@ -642,12 +636,10 @@ def read_csr(csr): # Get size returns in bytes. The world thinks of key sizes in bits. 'Subject': _parse_subject(csr.get_subject()), 'Subject Hash': _dec2hex(csr.get_subject().as_hash()), - 'Public Key Hash': hashlib.sha1(csr.get_pubkey().get_modulus())\ - .hexdigest() + 'Public Key Hash': hashlib.sha1(csr.get_pubkey().get_modulus()).hexdigest(), + 'X509v3 Extensions': _get_csr_extensions(csr), } - ret['X509v3 Extensions'] = _get_csr_extensions(csr) - return ret @@ -943,7 +935,7 @@ def create_crl( # pylint: disable=too-many-arguments,too-many-locals # pyOpenSSL Note due to current limitations in pyOpenSSL it is impossible # to specify a digest For signing the CRL. 
This will hopefully be fixed # soon: https://github.com/pyca/pyopenssl/pull/161 - if not HAS_OPENSSL: + if OpenSSL is None: raise salt.exceptions.SaltInvocationError( 'Could not load OpenSSL module, OpenSSL unavailable' ) @@ -969,8 +961,7 @@ def create_crl( # pylint: disable=too-many-arguments,too-many-locals continue if 'revocation_date' not in rev_item: - rev_item['revocation_date'] = datetime.datetime\ - .now().strftime('%Y-%m-%d %H:%M:%S') + rev_item['revocation_date'] = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S') rev_date = datetime.datetime.strptime( rev_item['revocation_date'], '%Y-%m-%d %H:%M:%S') @@ -1011,8 +1002,9 @@ def create_crl( # pylint: disable=too-many-arguments,too-many-locals try: crltext = crl.export(**export_kwargs) except (TypeError, ValueError): - log.warning( - 'Error signing crl with specified digest. Are you using pyopenssl 0.15 or newer? The default md5 digest will be used.') + log.warning('Error signing crl with specified digest. ' + 'Are you using pyopenssl 0.15 or newer? ' + 'The default md5 digest will be used.') export_kwargs.pop('digest', None) crltext = crl.export(**export_kwargs) @@ -1050,8 +1042,7 @@ def sign_remote_certificate(argdic, **kwargs): if 'signing_policy' in argdic: signing_policy = _get_signing_policy(argdic['signing_policy']) if not signing_policy: - return 'Signing policy {0} does not exist.'.format( - argdic['signing_policy']) + return 'Signing policy {0} does not exist.'.format(argdic['signing_policy']) if isinstance(signing_policy, list): dict_ = {} @@ -1091,6 +1082,7 @@ def get_signing_policy(signing_policy_name): signing_policy = _get_signing_policy(signing_policy_name) if not signing_policy: return 'Signing policy {0} does not exist.'.format(signing_policy_name) + if isinstance(signing_policy, list): dict_ = {} for item in signing_policy: @@ -1103,10 +1095,9 @@ def get_signing_policy(signing_policy_name): pass try: - signing_policy['signing_cert'] = get_pem_entry( - signing_policy['signing_cert'], 'CERTIFICATE') + signing_policy['signing_cert'] = get_pem_entry(signing_policy['signing_cert'], 'CERTIFICATE') except KeyError: - pass + log.debug('Unable to get "certificate" PEM entry') return signing_policy @@ -1356,8 +1347,7 @@ def create_certificate( salt '*' x509.create_certificate path=/etc/pki/myca.crt signing_private_key='/etc/pki/myca.key' csr='/etc/pki/myca.csr'} ''' - if not path and not text and \ - ('testrun' not in kwargs or kwargs['testrun'] is False): + if not path and not text and ('testrun' not in kwargs or kwargs['testrun'] is False): raise salt.exceptions.SaltInvocationError( 'Either path or text must be specified.') if path and text: @@ -1386,8 +1376,7 @@ def create_certificate( # Including listen_in and preqreuired because they are not included # in STATE_INTERNAL_KEYWORDS # for salt 2014.7.2 - for ignore in list(_STATE_INTERNAL_KEYWORDS) + \ - ['listen_in', 'preqrequired', '__prerequired__']: + for ignore in list(_STATE_INTERNAL_KEYWORDS) + ['listen_in', 'preqrequired', '__prerequired__']: kwargs.pop(ignore, None) certs = __salt__['publish.publish']( @@ -1500,8 +1489,7 @@ def create_certificate( continue # Use explicitly set values first, fall back to CSR values. 
- extval = kwargs.get(extname) or kwargs.get(extlongname) or \ - csrexts.get(extname) or csrexts.get(extlongname) + extval = kwargs.get(extname) or kwargs.get(extlongname) or csrexts.get(extname) or csrexts.get(extlongname) critical = False if extval.startswith('critical '): @@ -1623,8 +1611,8 @@ def create_csr(path=None, text=False, **kwargs): if 'private_key' not in kwargs and 'public_key' in kwargs: kwargs['private_key'] = kwargs['public_key'] - log.warning( - "OpenSSL no longer allows working with non-signed CSRs. A private_key must be specified. Attempting to use public_key as private_key") + log.warning("OpenSSL no longer allows working with non-signed CSRs. " + "A private_key must be specified. Attempting to use public_key as private_key") if 'private_key' not in kwargs: raise salt.exceptions.SaltInvocationError('private_key is required') @@ -1636,11 +1624,9 @@ def create_csr(path=None, text=False, **kwargs): kwargs['private_key_passphrase'] = None if 'public_key_passphrase' not in kwargs: kwargs['public_key_passphrase'] = None - if kwargs['public_key_passphrase'] and not kwargs[ - 'private_key_passphrase']: + if kwargs['public_key_passphrase'] and not kwargs['private_key_passphrase']: kwargs['private_key_passphrase'] = kwargs['public_key_passphrase'] - if kwargs['private_key_passphrase'] and not kwargs[ - 'public_key_passphrase']: + if kwargs['private_key_passphrase'] and not kwargs['public_key_passphrase']: kwargs['public_key_passphrase'] = kwargs['private_key_passphrase'] csr.set_pubkey(get_public_key(kwargs['public_key'], @@ -1684,18 +1670,10 @@ def create_csr(path=None, text=False, **kwargs): extstack.push(ext) csr.add_extensions(extstack) - csr.sign(_get_private_key_obj(kwargs['private_key'], passphrase=kwargs['private_key_passphrase']), kwargs['algorithm']) - if path: - return write_pem( - text=csr.as_pem(), - path=path, - pem_type='CERTIFICATE REQUEST' - ) - else: - return csr.as_pem() + return write_pem(text=csr.as_pem(), path=path, pem_type='CERTIFICATE REQUEST') if path else csr.as_pem() def verify_private_key(private_key, public_key, passphrase=None): @@ -1720,8 +1698,7 @@ def verify_private_key(private_key, public_key, passphrase=None): salt '*' x509.verify_private_key private_key=/etc/pki/myca.key \\ public_key=/etc/pki/myca.crt ''' - return bool(get_public_key(private_key, passphrase) - == get_public_key(public_key)) + return get_public_key(private_key, passphrase) == get_public_key(public_key) def verify_signature(certificate, signing_pub_key=None, @@ -1775,9 +1752,8 @@ def verify_crl(crl, cert): salt '*' x509.verify_crl crl=/etc/pki/myca.crl cert=/etc/pki/myca.crt ''' if not salt.utils.path.which('openssl'): - raise salt.exceptions.SaltInvocationError( - 'openssl binary not found in path' - ) + raise salt.exceptions.SaltInvocationError('External command "openssl" not found') + crltext = _text_or_file(crl) crltext = get_pem_entry(crltext, pem_type='X509 CRL') crltempfile = tempfile.NamedTemporaryFile() @@ -1798,10 +1774,7 @@ def verify_crl(crl, cert): crltempfile.close() certtempfile.close() - if 'verify OK' in output: - return True - else: - return False + return 'verify OK' in output def expired(certificate): @@ -1838,8 +1811,9 @@ def expired(certificate): ret['expired'] = True else: ret['expired'] = False - except ValueError: - pass + except ValueError as err: + log.debug('Failed to get data of expired certificate: %s', err) + log.trace(err, exc_info=True) return ret @@ -1862,6 +1836,7 @@ def will_expire(certificate, days): salt '*' x509.will_expire 
"/etc/pki/mycert.crt" days=30 ''' + ts_pt = "%Y-%m-%d %H:%M:%S" ret = {} if os.path.isfile(certificate): @@ -1871,18 +1846,13 @@ def will_expire(certificate, days): cert = _get_certificate_obj(certificate) - _check_time = datetime.datetime.utcnow() + \ - datetime.timedelta(days=days) + _check_time = datetime.datetime.utcnow() + datetime.timedelta(days=days) _expiration_date = cert.get_not_after().get_datetime() ret['cn'] = _parse_subject(cert.get_subject())['CN'] - - if _expiration_date.strftime("%Y-%m-%d %H:%M:%S") <= \ - _check_time.strftime("%Y-%m-%d %H:%M:%S"): - ret['will_expire'] = True - else: - ret['will_expire'] = False - except ValueError: - pass + ret['will_expire'] = _expiration_date.strftime(ts_pt) <= _check_time.strftime(ts_pt) + except ValueError as err: + log.debug('Unable to return details of a sertificate expiration: %s', err) + log.trace(err, exc_info=True) return ret diff --git a/salt/states/x509.py b/salt/states/x509.py index 209cbc6738..8c79c6d034 100644 --- a/salt/states/x509.py +++ b/salt/states/x509.py @@ -163,6 +163,7 @@ import copy # Import Salt Libs import salt.exceptions +import salt.utils.stringutils # Import 3rd-party libs from salt.ext import six @@ -170,7 +171,7 @@ from salt.ext import six try: from M2Crypto.RSA import RSAError except ImportError: - pass + RSAError = Exception('RSA Error') def __virtual__(): @@ -180,7 +181,7 @@ def __virtual__(): if 'x509.get_pem_entry' in __salt__: return 'x509' else: - return (False, 'Could not load x509 state: m2crypto unavailable') + return False, 'Could not load x509 state: the x509 is not available' def _revoked_to_list(revs): @@ -267,7 +268,8 @@ def private_key_managed(name, new: Always create a new key. Defaults to False. - Combining new with :mod:`prereq <salt.states.requsities.preqreq>`, or when used as part of a `managed_private_key` can allow key rotation whenever a new certificiate is generated. + Combining new with :mod:`prereq <salt.states.requsities.preqreq>`, or when used as part of a + `managed_private_key` can allow key rotation whenever a new certificiate is generated. overwrite: Overwrite an existing private key if the provided passphrase cannot decrypt it. 
@@ -459,8 +461,10 @@ def certificate_managed(name, private_key_args['name'], pem_type='RSA PRIVATE KEY') else: new_private_key = True - private_key = __salt__['x509.create_private_key'](text=True, bits=private_key_args['bits'], passphrase=private_key_args[ - 'passphrase'], cipher=private_key_args['cipher'], verbose=private_key_args['verbose']) + private_key = __salt__['x509.create_private_key'](text=True, bits=private_key_args['bits'], + passphrase=private_key_args['passphrase'], + cipher=private_key_args['cipher'], + verbose=private_key_args['verbose']) kwargs['public_key'] = private_key @@ -671,8 +675,10 @@ def crl_managed(name, else: current = '{0} does not exist.'.format(name) - new_crl = __salt__['x509.create_crl'](text=True, signing_private_key=signing_private_key, signing_private_key_passphrase=signing_private_key_passphrase, - signing_cert=signing_cert, revoked=revoked, days_valid=days_valid, digest=digest, include_expired=include_expired) + new_crl = __salt__['x509.create_crl'](text=True, signing_private_key=signing_private_key, + signing_private_key_passphrase=signing_private_key_passphrase, + signing_cert=signing_cert, revoked=revoked, days_valid=days_valid, + digest=digest, include_expired=include_expired) new = __salt__['x509.read_crl'](crl=new_crl) new_comp = new.copy() @@ -714,6 +720,6 @@ def pem_managed(name, Any arguments supported by :py:func:`file.managed <salt.states.file.managed>` are supported. ''' file_args, kwargs = _get_file_args(name, **kwargs) - file_args['contents'] = __salt__['x509.get_pem_entry'](text=text) + file_args['contents'] = salt.utils.stringutils.to_str(__salt__['x509.get_pem_entry'](text=text)) return __states__['file.managed'](**file_args) -- 2.17.1
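A note on the will_expire hunk in the x509 patch above: the rewritten code compares strftime() strings rather than datetime objects. Because the "%Y-%m-%d %H:%M:%S" pattern is fixed-width and zero-padded, lexicographic order of the strings agrees with chronological order, so the comparison is sound. A minimal standalone Python sketch (illustrative only; TS_PT and will_expire_within are made-up names, not Salt API):

import datetime

TS_PT = "%Y-%m-%d %H:%M:%S"  # same pattern as ts_pt in the patch

def will_expire_within(not_after, days):
    # Compare "now + days" against the certificate's notAfter date as
    # formatted strings; zero-padding makes string order == time order.
    check_time = datetime.datetime.utcnow() + datetime.timedelta(days=days)
    return not_after.strftime(TS_PT) <= check_time.strftime(TS_PT)

# A certificate expiring tomorrow "will expire" within 30 days, not within 0:
tomorrow = datetime.datetime.utcnow() + datetime.timedelta(days=1)
print(will_expire_within(tomorrow, 30))  # True
print(will_expire_within(tomorrow, 0))   # False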
++++++ xfs-do-not-fails-if-type-is-not-present.patch ++++++
From 769a18ffe6aac11dd10f33a607974d4e1dfca8fe Mon Sep 17 00:00:00 2001 From: Alberto Planas <aplanas@gmail.com> Date: Tue, 11 Jun 2019 17:21:05 +0200 Subject: [PATCH] xfs: do not fails if type is not present
The command `blkid -o export` does not always provide a 'TYPE' output for all devices. One example is non-formatted partitions, such as the BIOS partition. This patch no longer forces the presence of this field in the blkid output. (cherry picked from commit 88df6963470007aa4fe2adb09f000311f48226a8) --- salt/modules/xfs.py | 2 +- tests/unit/modules/test_xfs.py | 50 ++++++++++++++++++++++++++++++++++ 2 files changed, 51 insertions(+), 1 deletion(-) create mode 100644 tests/unit/modules/test_xfs.py diff --git a/salt/modules/xfs.py b/salt/modules/xfs.py index 6546603ed6..e133ec83e1 100644 --- a/salt/modules/xfs.py +++ b/salt/modules/xfs.py @@ -329,7 +329,7 @@ def _blkid_output(out): for items in flt(dev_meta.strip().split("\n")): key, val = items.split("=", 1) dev[key.lower()] = val - if dev.pop("type") == "xfs": + if dev.pop("type", None) == "xfs": dev['label'] = dev.get('label') data[dev.pop("devname")] = dev diff --git a/tests/unit/modules/test_xfs.py b/tests/unit/modules/test_xfs.py new file mode 100644 index 0000000000..4b423d69d1 --- /dev/null +++ b/tests/unit/modules/test_xfs.py @@ -0,0 +1,50 @@ +# -*- coding: utf-8 -*- + +# Import Python libs +from __future__ import absolute_import, print_function, unicode_literals +import textwrap + +# Import Salt Testing Libs +from tests.support.mixins import LoaderModuleMockMixin +from tests.support.unit import skipIf, TestCase +from tests.support.mock import ( + NO_MOCK, + NO_MOCK_REASON, + MagicMock, + patch) + +# Import Salt Libs +import salt.modules.xfs as xfs + + +@skipIf(NO_MOCK, NO_MOCK_REASON) +@patch('salt.modules.xfs._get_mounts', MagicMock(return_value={})) +class XFSTestCase(TestCase, LoaderModuleMockMixin): + ''' + Test cases for salt.modules.xfs + ''' + def setup_loader_modules(self): + return {xfs: {}} + + def test__blkid_output(self): + ''' + Test xfs._blkid_output when there is data + ''' + blkid_export = textwrap.dedent(''' + DEVNAME=/dev/sda1 + UUID=XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX + TYPE=xfs + PARTUUID=YYYYYYYY-YY + + DEVNAME=/dev/sdb1 + PARTUUID=ZZZZZZZZ-ZZZZ-ZZZZ-ZZZZ-ZZZZZZZZZZZZ + ''') + # We expect to find only data from /dev/sda1, nothing from + # /dev/sdb1 self.assertEqual(xfs._blkid_output(blkid_export), { '/dev/sda1': { 'label': None, 'partuuid': 'YYYYYYYY-YY', 'uuid': 'XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX' } }) -- 2.23.0
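To make the one-line fix in salt/modules/xfs.py above concrete: `blkid -o export` emits no TYPE key for unformatted partitions, so the old dev.pop("type") raised a KeyError and aborted the whole scan, while dev.pop("type", None) turns the missing key into an ordinary non-match. A standalone Python sketch (the sample record is illustrative, not the module's actual parsing loop):

# A blkid record without TYPE, e.g. a BIOS partition (sample data only).
dev = {"devname": "/dev/sdb1", "partuuid": "ZZZZZZZZ-ZZZZ"}

try:
    dict(dev).pop("type")  # pre-patch behavior: raises KeyError
except KeyError:
    print("old code: scan aborted with KeyError")

# Post-patch behavior: the default value means "not xfs", no exception.
if dict(dev).pop("type", None) == "xfs":
    print("xfs device:", dev["devname"])
else:
    print("skipped unformatted/non-xfs device:", dev["devname"])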