We are happy to announce the immediate availability of Uyuni 2022.02.
At https://www.uyuni-project.org/pages/stable-version.html you will find all
the resources you need to start working with Uyuni 2022.02, including the
release notes, documentation, requirements and setup instructions.
VERY IMPORTANT: Read the release notes! If you are updating from an Uyuni
version older than 2021.06, a major upgrade procedure is required.
This is the list of highlights for this release:
* Reporting Database
* Ubuntu errata installation
* Monitoring: Prometheus 2.32.1
* Monitoring: Postgres exporter updated to version 0.10.0 for SUSE Linux
Enterprise and openSUSE
* SLES PAYG client support on cloud
* OpenSCAP for Debian 11 (Tech Preview)
Please check the release notes for full details, and in particular review
the sections about the Postgres exporter update and the Prometheus update, as
they could require manual steps if you are using monitoring.
Remember that Uyuni follows a rolling release model, so the next
version will contain bugfixes for this one as well as any new features. There will be
no maintenance releases for 2022.02.
As always, we hope you will enjoy Uyuni 2022.02, and we invite every one of you
to send us your feedback [1] and, of course, your patches if you can
contribute.
Happy hacking!
[1] https://www.uyuni-project.org/pages/contact.html
--
Julio González Gil
Release Engineer, SUSE Manager and Uyuni
jgonzalez(a)suse.com
Hello!
Thank you for the CLM build Java stack overflow fix; that was a very fast response. I can confirm that the CLM build succeeds every time after installing the patch.
I had another minor problem regarding Content Lifecycle filters. I have created several module stream filters in our test environment, but somehow I cannot add more filters to an existing project. The GUI allows me to pick additional filters (via Add/Detach filters) and there is no error message when I click the save button. However, nothing happens; the change does not get saved and only 9 filters in total remain in use.
I was forced to detach every single filter from the project, then do a CLM build without any modules filtered, and after that I was able to add all 12 filters at once. Only then did all the filters come into use.
What, then, is the correct way to deal with modules?
My intended purpose is that nothing changes from the server's perspective when it gets bootstrapped to Uyuni. I want to use all the default packages that vanilla CentOS/Rocky/Alma uses after the initial installation. I understand that Uyuni does not directly support the Red Hat-specific module-enabled repositories and the whole (stupid) module stream concept, but you can deal with modules by using module filters and building a Content Lifecycle environment where module-enabled repositories get "flattened" into ordinary repos, and you can alter the flattening process with module filters.
After an initial Rocky/Alma/CentOS installation there are several module streams enabled; you can view them with the "dnf module list" command. Then you bootstrap the server to Uyuni, and all the original repos get replaced in favor of the custom channels from Uyuni's lifecycle environment. But "dnf module list" still shows those module streams as enabled.
How should I deal with this? Should I deactivate all the streams with the "dnf module reset" command? And what after that? Should I create a module filter for every module, or only for the ones I want to use? And what happens if I don't create any module filter at all?
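For reference, these are the client-side commands I am talking about (a rough sketch from memory, so please treat the exact syntax as my assumption):

    # show all module streams; enabled ones are marked with [e]
    dnf module list
    # put a single module (for example perl) back to its default, unselected state
    dnf module reset perl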
I know that SUSE does not use module streams at all, so this is not directly related to Uyuni, but I hope some of you can point me in the right direction anyway.
Br, Janne
Janne Karjanlahti (Mr.)
Vastaava järjestelmäasiantuntija | Senior Systems Specialist
ICT- ja digitaaliset palvelut | ICT and Digital Services
Satakunnan ammattikorkeakoulu | Satakunta University of Applied Sciences
Satakunnankatu 23 | 28130 | PORI | Finland
+358 44 710 3339
janne.karjanlahti(a)samk.fi
www.samk.fi
Occasionally I get a failure to apply updates or a failed reboot, and the reason given in the event summary is:
Minion is down or could not be contacted.
Retrying the command immediately afterwards almost invariably succeeds, which leads me to believe that the minion is not actually down; this looks like a time-out issue, and the minion perhaps takes a while to "wake up" if it has been inactive for some time.
It's annoying.
Do others have this issue, and is there a setting which will allow the Uyuni server to grant more time to the minion to respond?
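This is the sort of knob I am hoping exists; the two presence-ping timeouts below are what I recall from the SUSE Manager tuning documentation, so please treat the exact parameter names and values as an assumption to be verified:

    # /etc/rhn/rhn.conf (values in seconds; restart services afterwards,
    # e.g. with spacewalk-service restart)
    java.salt_presence_ping_timeout = 6
    java.salt_presence_ping_gather_job_timeout = 3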
Many thanks
T
___________________________________________________________
TIM SHAW - MSD IT Systems & Network Services
Medical Sciences Division - University of Oxford
email : tim.shaw(a)medsci.ox.ac.uk
tel : +44 (0)1865 289480
Hello!
Yesterday a Content Lifecycle Project build failed without any additional information. It was in our test environment and it had never happened before. I (re)synced all the source repos by hand (just in case), and after that the build succeeded and everything seemed to be OK. I was in a bit of a hurry and didn't find any errors in the log files at first glance, so I left it...
This morning I encountered the same "failed" notice on our production Uyuni server. This time I clicked "Build" again without an additional repo resync, and the second build succeeded.
However, the repository content seems to be somewhat messed up on both Uyuni servers. The client servers found a lot of updates, mainly various Perl packages, but could not update because of many conflicting dependencies. There are a lot of Perl 5.30 updates available even though I have not enabled this module stream. The default Perl branch in Red Hat EL8 derived distributions is 5.26, and additional versions are provided via module streams. The currently installed Perl version on my servers is 5.26. In fact there are no filters concerning Perl at all in this Content Lifecycle Project.
I found errors in the /var/log/rhn/rhn_web_ui.log file corresponding to the failed environment build:
2022-02-08 07:03:03,580 [RHN Message Dispatcher] ERROR com.redhat.rhn.frontend.events.TransactionHelper - com.redhat.rhn.frontend.events.AlignSoftwareTargetAction$AlignSoftwareTargetException: java.lang.StackOverflowError
2022-02-08 07:03:03,656 [RHN Message Dispatcher] ERROR com.redhat.rhn.frontend.events.AlignSoftwareTargetAction - Error aligning target 3
com.redhat.rhn.frontend.events.AlignSoftwareTargetAction$AlignSoftwareTargetException: java.lang.StackOverflowError
at com.redhat.rhn.frontend.events.AlignSoftwareTargetAction.execute(AlignSoftwareTargetAction.java:71)
at com.redhat.rhn.common.messaging.ActionExecutor.lambda$run$0(ActionExecutor.java:67)
at com.redhat.rhn.frontend.events.TransactionHelper.run(TransactionHelper.java:63)
at com.redhat.rhn.frontend.events.TransactionHelper.handlingTransaction(TransactionHelper.java:47)
at com.redhat.rhn.common.messaging.ActionExecutor.run(ActionExecutor.java:67)
at com.redhat.rhn.common.messaging.MessageDispatcher.run(MessageDispatcher.java:91)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.lang.StackOverflowError
at java.base/java.util.function.Predicate.lambda$or$2(Predicate.java:101)
at java.base/java.util.function.Predicate.lambda$or$2(Predicate.java:101)
at java.base/java.util.function.Predicate.lambda$or$2(Predicate.java:101)
The last message continues for about a thousand lines.
Are there any other log files I should check?
We are using openSUSE Leap 15.3 and Uyuni 2022.01.
All the client servers and repositories/channels are Rocky Linux 8.
I think we must ditch the current Lifecycle Environment and build a totally new one for our Rocky Linux servers. I wonder, however, whether this is a bug or whether there is something we should do differently this time to avoid this kind of error?
Br, Janne
Janne Karjanlahti (Mr.)
Vastaava järjestelmäasiantuntija | Senior Systems Specialist
ICT- ja digitaaliset palvelut | ICT and Digital Services
Satakunnan ammattikorkeakoulu | Satakunta University of Applied Sciences
Satakunnankatu 23 | 28130 | PORI | Finland
+358 44 710 3339
janne.karjanlahti(a)samk.fi
www.samk.fi
Dear Uyuni Community.
In order to support the growing number of distributions that use modular
repositories, the Content Lifecycle Management feature received some updates that
can cause stack overflow exceptions, which may make builds fail and show
stack overflow errors in `rhn_web_ui.log`.
A bugfix to offload the work from the Java stack to prevent such overflows is
now released.
See how to apply the patch at https://uyuni-project.org/pages/patches.html.
The fix will be part of Uyuni 2022.02 as well, but we recommend you
don't wait and apply the patch as soon as possible, in particular if you are
already using Content Lifecycle Management.
Best regards.
--
Julio González Gil
Release Engineer, SUSE Manager and Uyuni
jgonzalez(a)suse.com
Hello list
I am making my first tests with configuration management in Uyuni. Creating
a configuration channel, assigning it to a server and copying a file 1:1 to
that server worked.
What I cannot do yet is add dynamic values from the target system to
this file. I thought it would work as described in the manual (content of the
configuration file):
# My test file
hostname={| rhn.system.hostname |}
ip_address={| rhn.system.net_interface.ip_address(eth0) |}
Unfortunately, this is exactly what the file looked like on the target
system:
# My test file
hostname={| rhn.system.hostname |}
ip_address={| rhn.system.net_interface.ip_address(eth0) |}
and not as in the manual:
# My test file
hostname=myserver.mydomain.tld
ip_address=192.168.5.31
This is probably because my openSUSE client is not registered as "Traditional" but
connected with "Salt".
Even if I try it with Python, the Python code is copied 1:1 to the target
system and not interpreted.
Can someone give me a working example of how I would have to create this
to achieve the desired goal? After that I could certainly build on it.
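For context, this is the kind of syntax I was expecting to be able to use instead, assuming the file is rendered by Salt with Jinja templating and that the standard grains are available (whether Uyuni actually renders Jinja in plain configuration channel files is exactly what I do not know):

    # My test file
    hostname={{ grains['fqdn'] }}
    ip_address={{ grains['ip4_interfaces']['eth0'][0] }}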
Btw: now that openSUSE 15.3 reports itself as SLES 15, is there still no
traditional client for openSUSE? :)
Best regards
Martin