I’m trying to set up an Uyuni server to replace a Spacewalk server,
starting with setting up support for Ubuntu 20.04. Ideally we would like to
be able to use a vSphere VM template that we can use to generate/clone new
VMs, and then run a script on that new VM to customize it with the specific
desired hostname, AD domain registration, and Uyuni registration. For the
Uyuni registration, I started with the generated bootstrap script and
customized it with an Activation Key and the Ubuntu GPG keys.
The registration script appears to work: if I look at Uyuni's Salt->Keys
page, I see the key, can approve it, and the system shows up in the system
list... the first time.
On subsequent VMs, however, I see the keys in the Salt->Keys page and can
approve them, but after some time only one of the two VMs remains in the
system list, usually the last one added.
While setting up the Uyuni server and VM template, it took me a while to
figure out that I was supposed to use the modified bootstrap script; I had
first tried to install the Salt packages directly on the template and
thought that might be the problem. Taking a hint from the bootstrap script,
I ran
apt-get purge salt-minion
apt-get purge salt-common
rm -rf /etc/salt/minion.d/
on the template to clear any Salt state, cleared the systems and keys on the
Uyuni server, and started over creating new VMs... with the same result.
Any suggestions on what could be going wrong?
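For reference, the cleanup I did on the template can be written down as one script. The minion_id and PKI paths are my own assumption (they are not in the bootstrap script I copied from); a stale /etc/salt/minion_id baked into the template would make every clone present the same Salt identity, which would match the only-one-VM-survives symptom:

```shell
#!/bin/sh
# Run on the VM template before converting it back to a template.
# Stop the minion if it is running; ignore errors if it is not installed.
systemctl stop salt-minion 2>/dev/null || true
# Remove the Salt packages and their configuration.
apt-get -y purge salt-minion salt-common
# Clear leftover identity and state. minion_id and the PKI directory are
# assumptions on my part: if they survive in the template, every clone
# registers under the same Salt key.
rm -rf /etc/salt/minion.d/ /etc/salt/minion_id /etc/salt/pki/
```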
Paul-Andre Panon, B.Sc.
Senior Systems Administrator
Video Security & Analytics
*o:* +1.604.629.5182 ext 2190
Our Uyuni infrastructure is quite small, with 90 registered clients and growing, but for some days we have been noticing what looks like starvation in Taskomatic, which no longer seems
able to launch SSHPush tasks or check Uyuni client status. All registered servers are consequently shown as inactive ("System not checking in with Uyuni").
It is perhaps important to mention that "push via SSH" is the registration method used for all registered servers (i.e. no Salt minion installed).
The only servers not affected by this issue are the Uyuni proxies, which are registered with salt-minion.
We repeatedly see the following in the Taskomatic logs:
2020-11-22 10:21:00,500 [DefaultQuartzScheduler_Worker-88] WARN com.redhat.rhn.taskomatic.task.SSHPush - Maximum number of workers already put ... skipping.
2020-11-22 10:22:00,327 [DefaultQuartzScheduler_Worker-22] WARN com.redhat.rhn.taskomatic.task.SSHPush - Maximum number of workers already put ... skipping.
2020-11-22 10:23:00,173 [DefaultQuartzScheduler_Worker-92] WARN com.redhat.rhn.taskomatic.task.SSHPush - Maximum number of workers already put ... skipping.
There is no further information in any logs, and repository syncs still work without issue. It seems the SSHPush task queue gets blocked at some point?
A restart of Taskomatic temporarily solves the issue.
I have tried to follow https://www.uyuni-project.org/uyuni-docs/uyuni/large-deployments/tuning.html,
but from what the doc says it should not be necessary in my case ("Tuning is not required on installations of fewer than 1000 clients. Do not perform these instructions on small or medium scale installations."), and it made no difference anyway.
Any ideas? Has anybody experienced this same issue?
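For completeness, the temporary workaround and the log check can be written down as follows (the log path is the default on our installation; adjust if yours differs):

```shell
# Restart taskomatic and confirm the SSHPush workers resume.
systemctl restart taskomatic
# Watch for further "Maximum number of workers" warnings.
tail -n 200 /var/log/rhn/rhn_taskomatic_daemon.log | grep SSHPush
```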
Philippe Bidault | Unix Engineer | Getronics
M. 34617301667 | E. Philippe.Bidault(a)Getronics.com | W. www.getronics.com<https://www.getronics.com>
I have problems bootstrapping the Uyuni salt-minion on one of our web servers because of a conflict with zeromq. The CentOS machine uses php74-php-pecl-zmq, which needs libzmq.so.5. If I try to install the salt-minion:
Error: Package: python-zmq-14.5.0-3.9.uyuni.x86_64 (SUSE-Manager-Bootstrap)
Available: zeromq-4.0.5-1.9.uyuni.x86_64 (systemsmanagement_Uyuni_Stable_CentOS7-Uyuni-Client-Tools)
Installed: zeromq-4.1.4-6.el7.x86_64 (@epel)
How can I fix this?
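In case it helps to reproduce, the conflict can be inspected before deciding on a fix. The queries and the repo exclusion below are suggestions, not a verified solution; in particular, check that php74-php-pecl-zmq still works if zeromq ends up downgraded from EPEL's 4.1.4 to the Uyuni repo's 4.0.5:

```shell
# What is installed, and who depends on the newer libzmq?
rpm -q zeromq
rpm -q --whatrequires 'libzmq.so.5()(64bit)'
# One possible approach: resolve salt-minion against the Uyuni client
# tools repo while keeping EPEL out of this single transaction.
yum install salt-minion --disablerepo=epel
```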
Is syncing a CentOS 8 kickstart tree (or syncing kickstart trees in general) supported on SUSE Leap 15.2 with Uyuni master?
I could not find a reference to the kickstart sync function in the documentation.
Running reposync on http://msync.centos.org/centos/8/BaseOS/x86_64/os/ gives me this error:
# /usr/bin/spacewalk-repo-sync --channel centos8-baseos-x86_64 --type yum --non-interactive --sync-kickstart
08:09:43 | Channel: centos8-baseos-x86_64
08:09:43 Sync of channel started.
Retrieving repository 'centos8-baseos-x86_64' metadata ....................................................................................................................................................[done]
Building repository 'centos8-baseos-x86_64' cache .........................................................................................................................................................[done]
All repositories have been refreshed.
08:09:46 Repo URL: http://msync.centos.org/centos/8/BaseOS/x86_64/os/
08:09:46 Packages in repo: 1697
08:09:51 No new packages to sync.
08:09:51 Patches in repo: 0.
08:09:52 Importing kickstarts.
08:09:52 Trying treeinfo
08:09:52 Unexpected error: <class 'AttributeError'>
08:09:52 Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/spacewalk/satellite_tools/repo_plugins/yum_src.py", line 1175, in get_file
downloaded = urlgrabber.urlgrab(path, temp_file)
File "/usr/lib/python3.6/site-packages/urlgrabber/grabber.py", line 787, in urlgrab
return default_grabber.urlgrab(url, filename, **kwargs)
File "/usr/lib/python3.6/site-packages/urlgrabber/grabber.py", line 1203, in urlgrab
urlgrabber.grabber.URLGrabError: [Errno 2] Local file does not exist: /root/treeinfo
Checking the path and temp_file variables just before the call to urlgrabber.urlgrab gives me this:
What is path supposed to be?
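As a quick cross-check outside of spacewalk-repo-sync, one can ask the mirror directly whether it serves a treeinfo file at either of the usual names (the two candidate names are an assumption on my part, based on the "Trying treeinfo" log line):

```shell
base=http://msync.centos.org/centos/8/BaseOS/x86_64/os
# A 404 on both would explain the failed urlgrab and the missing
# /root/treeinfo local file.
curl -sI "$base/.treeinfo" | head -n 1
curl -sI "$base/treeinfo"  | head -n 1
```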
Thank you and best wishes,
I'm trying to import the repositories for Debian 10.
Importing the main repo, security and updates succeeds, but syncing the client tools generates the following error:
2021/01/15 09:09:11 +02:00 Command: ['/usr/bin/spacewalk-repo-sync', '--channel', 'debian-10-amd64-uyuni-client', '--type', 'deb', '--non-interactive']
2021/01/15 09:09:11 +02:00 Sync of channel started.
2021/01/15 09:09:12 +02:00 Repo URL: https://download.opensuse.org/repositories/systemsmanagement:/Uyuni:/Stable…
2021/01/15 09:09:12 +02:00 Packages in repo: 15
2021/01/15 09:09:12 +02:00 Packages already synced: 0
2021/01/15 09:09:12 +02:00 Packages to sync: 15
2021/01/15 09:09:12 +02:00 New packages to download: 0
2021/01/15 09:09:12 +02:00 Downloading packages:
2021/01/15 09:09:12 +02:00 Importing packages started.
2021/01/15 09:09:12 +02:00
2021/01/15 09:09:12 +02:00 Importing packages to DB:
2021/01/15 09:09:12 +02:00 Package batch #1 of 1 completed...
2021/01/15 09:09:12 +02:00 Importing packages finished.
2021/01/15 09:09:12 +02:00
2021/01/15 09:09:12 +02:00 Linking packages to the channel.
2021/01/15 09:09:12 +02:00 Unexpected error: <class 'TypeError'>
2021/01/15 09:09:12 +02:00 Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/spacewalk/server/rhnSQL/driver_postgresql.py", line 85, in __call__
ret = self.cursor.execute(query, args)
psycopg2.errors.UniqueViolation: duplicate key value violates unique constraint "rhn_cnp_cid_nid_uq"
DETAIL: Key (channel_id, name_id, package_arch_id)=(154, 1349, 134) already exists.
CONTEXT: SQL statement "insert into rhnChannelNewestPackage
(channel_id, name_id, evr_id, package_id, package_arch_id)
(select channel_id, name_id, evr_id, package_id, package_arch_id
from rhnChannelNewestPackageView
where channel_id = channel_id_in
and (package_name_id_in is null
or name_id = package_name_id_in)
)"
PL/pgSQL function rhn_channel.refresh_newest_package(numeric,character varying,numeric) line 9 at SQL statement
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/spacewalk/server/importlib/backend.py", line 2072, in update_newest_package_cache
refresh_newest_package(channel_id, caller, None)
File "/usr/lib/python3.6/site-packages/spacewalk/server/rhnSQL/driver_postgresql.py", line 116, in __call__
result = Function.__call__(self, *args)
File "/usr/lib/python3.6/site-packages/spacewalk/server/rhnSQL/driver_postgresql.py", line 92, in __call__
raise sql_base.SQLSchemaError(error_code, e.pgerror, e)
spacewalk.server.rhnSQL.sql_base.SQLSchemaError: (99999, 'ERROR: duplicate key value violates unique constraint "rhn_cnp_cid_nid_uq"', 'DETAIL: Key (channel_id, name_id, package_arch_id)=(154, 1349, 134) already exists.\nCONTEXT: SQL statement "insert into rhnChannelNewestPackage\n (channel_id, name_id, evr_id, package_id, package_arch_id)\n (select channel_id,\n name_id, evr_id,\n package_id, package_arch_id\n from rhnChannelNewestPackageView\n where channel_id = channel_id_in\n and (package_name_id_in is null\n or name_id = package_name_id_in)\n )"\nPL/pgSQL function rhn_channel.refresh_newest_package(numeric,character varying,numeric) line 9 at SQL statement\n', UniqueViolation('duplicate key value violates unique constraint "rhn_cnp_cid_nid_uq"\nDETAIL: Key (channel_id, name_id, package_arch_id)=(154, 1349, 134) already exists.\nCONTEXT: SQL statement "insert into rhnChannelNewestPackage\n (channel_id, name_id, evr_id, package_id, package_arch_id)\n (select channel_id,\n name_id, evr_id,\n package_id, package_arch_id\n from rhnChannelNewestPackageView\n where channel_id = channel_id_in\n and (package_name_id_in is null\n or name_id = package_name_id_in)\n )"\nPL/pgSQL function rhn_channel.refresh_newest_package(numeric,character varying,numeric) line 9 at SQL statement\n',))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/spacewalk/satellite_tools/reposync.py", line 592, in sync
ret = self.import_packages(plugin, data['id'], url, is_non_local_repo)
File "/usr/lib/python3.6/site-packages/spacewalk/satellite_tools/reposync.py", line 1107, in import_packages
File "/usr/lib/python3.6/site-packages/spacewalk/server/importlib/importLib.py", line 777, in run
File "/usr/lib/python3.6/site-packages/spacewalk/server/importlib/packageImport.py", line 142, in submit
File "/usr/lib/python3.6/site-packages/spacewalk/server/importlib/backend.py", line 2075, in update_newest_package_cache
raise_with_tb(rhnFault(23, str(e), explain=0), sys.exc_info())
TypeError: 'SQLSchemaError' object does not support indexing
My server is Uyuni 2020.11 with updates from today.
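Before retrying I wanted to look at the offending cache row. A sketch using spacewalk-sql (the IDs are copied from the DETAIL line above; that spacewalk-sql is available on a stock Uyuni server is an assumption):

```shell
# Show the rhnChannelNewestPackage row that collides with the insert.
echo "SELECT channel_id, name_id, evr_id, package_id, package_arch_id
        FROM rhnChannelNewestPackage
       WHERE channel_id = 154
         AND name_id = 1349
         AND package_arch_id = 134;" | spacewalk-sql --select-mode -
```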
btw: why are there two parent channels for Debian 10?
- Debian 10 (buster) pool for amd64
- Debian 10 (buster) pool for amd64 for Uyuni (this is the one with the uyuni client tools)
Our Uyuni installation should patch CentOS and Oracle Linux machines, so I have both kinds of machines.
Some of them use Docker packages, so I configured a centos-docker repository, connected to a centos-docker channel.
Now I need an Oracle Linux Docker channel that uses the same repository. So I created an Oracle channel connected to the centos repo, because this repo can serve both operating systems. In the CentOS channel I see the Docker packages, but in the Oracle channel I see no packages.
What is wrong, or how do I connect one repository to several channels?
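One repository can be attached to any number of channels. Here is a sketch against the XML-RPC API; the server name, credentials, and channel/repo labels are placeholders, and channel.software.associateRepo is the call I believe handles this (please verify against the API docs of your Uyuni version):

```shell
API=https://uyuni.example.com/rpc/api
# Log in and extract the session key from the XML-RPC response
# (placeholders: admin / secret).
SESSION=$(curl -s "$API" -H 'Content-Type: text/xml' --data \
'<?xml version="1.0"?><methodCall><methodName>auth.login</methodName>
<params><param><value><string>admin</string></value></param>
<param><value><string>secret</string></value></param></params></methodCall>' \
  | sed -n 's:.*<string>\(.*\)</string>.*:\1:p')
# Attach the existing repo to the second (Oracle) channel.
curl -s "$API" -H 'Content-Type: text/xml' --data \
"<?xml version=\"1.0\"?><methodCall>
<methodName>channel.software.associateRepo</methodName>
<params><param><value><string>$SESSION</string></value></param>
<param><value><string>oracle-docker</string></value></param>
<param><value><string>centos-docker-repo</string></value></param>
</params></methodCall>"
```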
The following situation: I want to bootstrap a server with an underscore in its hostname. The host cannot be found, but this must be wrong; nslookup and ssh connections to this host work.
Is this a bug, or why can I not use underscores in hostnames?
I want to bootstrap a client via a private/public key pair, but I get an error message:
Load key "/srv/susemanager/salt/salt_ssh/temp_bootstrap_keys/boostrapKeyTmp-0b697b75-4f8c-4df3-b4eb-86a6b45d7ec8": invalid format
I suspect this could be a CR/LF problem, because the private key is stored on a Windows machine. When I copy the temp file before it is deleted, I can see that it contains CR/LF line endings and so differs from the Linux file.
Could this be the problem?
cat -e <temp_bootstrap_keys/boostrapKeyTmp-0b697b75-4f8c-4df3-b4eb-86a6b45d7ec8
-----BEGIN OPENSSH PRIVATE KEY-----^M$
cat -e <original file on linux
-----BEGIN OPENSSH PRIVATE KEY-----$
Is it possible to use a private key, stored somewhere on the uyuni server?
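If the CR/LF endings turn out to be the cause, stripping them before upload is a one-liner (dos2unix does the same, if installed; the file names below are placeholders):

```shell
# Remove the Windows carriage returns from the key copied off the
# Windows machine, then verify that no ^M$ remains at line ends.
tr -d '\r' < id_key.crlf > id_key
cat -e id_key | head -n 1
```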
I'm about to create the appointments for the next months, and I was wondering whether the last Friday of the month at 4pm CET is still the best time.
What are your thoughts? Would you prefer some other week of the month, day or time?
Pau Garcia Quiles
SUSE Manager Product Owner & Technical Project Manager
Phone: +34 91 048 7632
SUSE Software Solutions Spain