Hi all,
Not sure if I am alone here, but I am still stuck with this and have opened an issue:
https://github.com/uyuni-project/uyuni/issues/3182
Philippe.
-----Original Message-----
From: Bidault, Philippe
Sent: Sunday, 22 November 2020 16:09
To: Julio González Gil ; users@lists.uyuni-project.org
Subject: RE: Uyuni starvation, not checking registered client
Uyuni 2020.09 installed, but without the last CVE patch:
# zypper info salt-master
Loading repository data...
Warning: Repository 'Main Update Repository' appears to be outdated. Consider using a different mirror or server.
Reading installed packages...
Information for package salt-master:
------------------------------------
Repository : Main Update Repository
Name : salt-master
Version : 3000-lp152.3.9.1
Arch : x86_64
Vendor : openSUSE
Installed Size : 3.0 MiB
Installed : Yes (automatically)
Status : up-to-date
Source package : salt-3000-lp152.3.9.1.src
Summary : The management component of Saltstack with zmq protocol supported
Description :
The Salt master is the central server to which all minions connect.
Enabled commands to remote systems to be called in parallel rather
than serially.
uyuni:~ # rpm -q --changelog salt | head -50
* Wed Aug 12 2020 Pablo Suárez Hernández
- Require /usr/bin/python instead of /bin/python for RHEL-family (bsc#1173936)
- Don't install SuSEfirewall2 service files in Factory
- Fix __mount_device wrapper to accept separate args and kwargs
- Fix the registration of libvirt pool and nodedev events
- Accept nested namespaces in spacewalk.api runner function. (bsc#1172211)
- info_installed works without status attr now (bsc#1171461)
- Added:
* info_installed-works-without-status-attr-now.patch
* fix-__mount_device-wrapper-253.patch
* opensuse-3000-libvirt-engine-fixes-248.patch
* opensuse-3000-spacewalk-runner-parse-command-247.patch
* Thu Jul 16 2020 Jochen Breuer
- Fix for TypeError in Tornado importer (bsc#1174165)
- Added:
* fix-type-error-in-tornadoimporter.patch
* Thu Jun 18 2020 Pablo Suárez Hernández
- Require python3-distro only for TW (bsc#1173072)
* Thu Jun 11 2020 Pablo Suárez Hernández
- Various virt backports from 3000.2
- Added:
* opensuse-3000.2-virt-backports-236.patch
* Mon Jun 08 2020 Pablo Suárez Hernández
- Avoid traceback on debug logging for swarm module (bsc#1172075)
- Add publish_batch to ClearFuncs exposed methods
- zypperpkg: filter patterns that start with dot (bsc#1171906)
- Batch mode now also correctly provides return value (bsc#1168340)
- Add docker.logout to docker execution module (bsc#1165572)
- Testsuite fix
- Add option to enable/disable force refresh for zypper
- Python3.8 compatibility changes
- Prevent spurious "salt-api" stuck processes when managing SSH minions because of logging deadlock (bsc#1159284)
- Avoid segfault from "salt-api" under certain conditions of heavy load managing SSH minions (bsc#1169604)
- Revert broken changes to slspath made on Salt 3000 (saltstack/salt#56341) (bsc#1170104)
- Returns the list of IPs filtered by the optional network list
- Added:
* option-to-en-disable-force-refresh-in-zypper-215.patch
* zypperpkg-filter-patterns-that-start-with-dot-243.patch
* prevent-logging-deadlock-on-salt-api-subprocesses-bs.patch
* revert-changes-to-slspath-saltstack-salt-56341.patch
* fix-for-return-value-ret-vs-return-in-batch-mode.patch
* add-docker-logout-237.patch
* add-ip-filtering-by-network.patch
* make-lazyloader.__init__-call-to-_refresh_file_mappi.patch
* add-publish_batch-to-clearfuncs-exposed-methods.patch
* python3.8-compatibility-pr-s-235.patch
Philippe.
Philippe Bidault | Unix Engineer | Getronics
M. 34617301667 | E. Philippe.Bidault@Getronics.com | W. www.getronics.com
Getronics CMC Service Desk Iberia S.L - VAT No: B66686262.
Registered Office - Getronics CMC Service Desk Iberia S.L, C/Rosselló i Porcel, 21 planta 11, 08016 Barcelona, Spain.
The information transmitted is intended only for use by the addressee and may contain confidential and/or privileged material. Any review, re-transmission, dissemination or other use of it, or the taking of any action in reliance upon this information by persons and/or entities other than the intended recipient is prohibited. If you received this in error, please inform the sender and/or addressee immediately and delete the material. Thank you.
Legal disclaimer: http://www.getronics.com/legal/
-----Original Message-----
From: Julio González Gil
Sent: Sunday, 22 November 2020 14:53
To: users@lists.uyuni-project.org
Cc: Bidault, Philippe
Subject: Re: Uyuni starvation, not checking registered client
Looks strange indeed. 90 clients is not a big deployment.
Are you on 2020.09 + the patch for the CVEs that we announced one week ago?
Can you provide the output of `zypper info salt-master` and `rpm -q --changelog salt | head -n 50`?
On Sunday, 22 November 2020 12:04:31 (CET), Bidault, Philippe wrote:
Hi all,
Our Uyuni infrastructure is quite small, with 90 registered clients and increasing. However, for some days now we have noticed what looks like starvation of taskomatic, which does not seem to be able to launch SSHPush tasks and check Uyuni client status. All the registered servers are consequently shown as inactive ("System not checking in with Uyuni"). It is perhaps important to mention that "push via SSH" is the registration method used for all the registered servers (i.e. no Salt minion installed). The only servers not affected by this issue are the Uyuni proxies, which are registered with salt-minion.
We can repeatedly see this in the taskomatic logs:
2020-11-22 10:21:00,500 [DefaultQuartzScheduler_Worker-88] WARN com.redhat.rhn.taskomatic.task.SSHPush - Maximum number of workers already put ... skipping.
2020-11-22 10:22:00,327 [DefaultQuartzScheduler_Worker-22] WARN com.redhat.rhn.taskomatic.task.SSHPush - Maximum number of workers already put ... skipping.
2020-11-22 10:23:00,173 [DefaultQuartzScheduler_Worker-92] WARN com.redhat.rhn.taskomatic.task.SSHPush - Maximum number of workers already put ... skipping.
There is no more information in any other logs, and repository sync is still working without issue. It seems the SSHPush task queue is blocked at some point? A restart of taskomatic temporarily solves the issue.
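For reference, a quick way to see how often that warning recurs between restarts (a small sketch on my side; the log path is the usual Uyuni default, adjust it if yours differs):

```shell
# Count how often taskomatic logged the SSHPush worker warning
# (LOG defaults to the usual Uyuni taskomatic log; override if needed)
LOG=${LOG:-/var/log/rhn/rhn_taskomatic_daemon.log}
grep -c 'Maximum number of workers already put' "$LOG"
```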
I have tried to follow https://www.uyuni-project.org/uyuni-docs/uyuni/large-deployments/tuning.html, but from what the doc says it should not be necessary in my case ("Tuning is not required on installations of fewer than 1000 clients. Do not perform these instructions on small or medium scale installations."), and I had no luck with it anyway.
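In case it is relevant, one setting that looks related to this worker limit is taskomatic.ssh_push_workers in /etc/rhn/rhn.conf (this is an assumption on my side; I have not confirmed it governs the SSHPush workers on my version, so please verify against the Uyuni docs before changing it). Something like the following, followed by a taskomatic restart:

```
# /etc/rhn/rhn.conf -- hypothetical tuning, verify against your Uyuni docs
taskomatic.ssh_push_workers = 4
```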
Any ideas? Has anybody experienced this same issue?
Philippe.
--
Julio González Gil
Release Engineer, SUSE Manager and Uyuni
jgonzalez@suse.com
_______________________________________________
Uyuni Users mailing list -- users@lists.uyuni-project.org
To unsubscribe, email users-leave@lists.uyuni-project.org
List Netiquette: https://en.opensuse.org/openSUSE:Mailing_list_netiquette
List Archives: https://lists.opensuse.org/archives/list/users@lists.uyuni-project.org