Hi Folks,
Thanks for the new release, but I'm having significant issues updating (around four hours so far) and currently have a broken system.
Please advise!
'spacewalk-server status' reports all is fine; however, the WebUI times out (at other stages it was returning a 404 for all URLs).
Steps I have done:
* Changed repos and upgraded base OS to Leap 15.3
* Upgraded database as per https://www.uyuni-project.org/uyuni-docs/en/uyuni/upgrade/db-migration-13.h…
* Changed the Uyuni repo.
* Normal Uyuni upgrade.
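For the record, the base OS step was roughly the following (a sketch from memory; the sed approach and exact flags may differ from what the docs recommend, so treat it as illustrative only):

```shell
# Sketch of the Leap 15.2 -> 15.3 repo switch and upgrade (from memory).
migrate_to_leap153() {
  local OLD=15.2 NEW=15.3
  # Point every repo file at the new release
  sed -i "s/${OLD}/${NEW}/g" /etc/zypp/repos.d/*.repo
  zypper ref                        # refresh metadata against the new repos
  zypper dup --allow-vendor-change  # full distribution upgrade
}
# Run only on the Uyuni server itself:
# migrate_to_leap153
```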
All package issues seemingly resolved, apart from this:
Problem: the to be installed patch:SUSE-2020-3767-1.noarch conflicts with 'apache-commons-el < 1.0-3.3.1' provided by the installed apache-commons-el-1.0-bp153.2.24.noarch
Solution 1: Following actions will be done:
deinstallation of apache-commons-el-1.0-bp153.2.24.noarch
deinstallation of spacewalk-java-4.2.23-1.7.uyuni1.noarch
deinstallation of spacewalk-common-4.2.3-1.6.uyuni1.noarch
deinstallation of spacewalk-postgresql-4.2.3-1.6.uyuni1.noarch
deinstallation of patterns-uyuni_server-2021.06-2.3.uyuni1.x86_64
deinstallation of susemanager-4.2.19-1.2.uyuni1.x86_64
deinstallation of supportutils-plugin-susemanager-4.2.2-2.4.uyuni1.noarch
deinstallation of uyuni-cluster-provider-caasp-4.2.3-1.4.uyuni1.noarch
Solution 2: do not install patch:SUSE-2020-3767-1.noarch
(I did try solution 1, then reinstalled the uyuni-server pattern, but the conflict just reappears. I seem to recall there was a breaking issue with apache-commons-el in a previous update.)
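In case it helps with diagnosis, this is how I've been inspecting the conflict and parking the offending patch (the addlock type syntax is from memory, so verify with 'zypper help addlock' before relying on it):

```shell
inspect_commons_el_conflict() {
  # What on the system still requires the old apache-commons-el?
  rpm -q --whatrequires apache-commons-el
  # Park the conflicting patch so zypper stops proposing solution 1
  # (lock syntax from memory; double-check locally).
  zypper addlock -t patch SUSE-2020-3767-1
}
# inspect_commons_el_conflict
```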
The main issue appears to be this, from Catalina's logs (the directory is present and contains an index.jsp file and subdirectories):
24-Jun-2021 11:24:31.328 INFO [main] org.apache.catalina.core.StandardEngine.startInternal Starting Servlet engine: [Apache Tomcat/9.0.36]
24-Jun-2021 11:24:31.357 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory [/srv/tomcat/webapps/rhn]
24-Jun-2021 11:24:33.950 SEVERE [main] org.apache.catalina.startup.HostConfig.deployDirectory Error deploying web application directory [/srv/tomcat/webapps/rhn]
java.lang.IllegalStateException: Error starting child
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:720)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:690)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:705)
at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1132)
at org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1865)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at org.apache.tomcat.util.threads.InlineExecutorService.execute(InlineExecutorService.java:75)
at java.base/java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:118)
at org.apache.catalina.startup.HostConfig.deployDirectories(HostConfig.java:1044)
at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:429)
at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1575)
at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:309)
at org.apache.catalina.util.LifecycleBase.fireLifecycleEvent(LifecycleBase.java:123)
at org.apache.catalina.util.LifecycleBase.setStateInternal(LifecycleBase.java:423)
at org.apache.catalina.util.LifecycleBase.setState(LifecycleBase.java:366)
at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:936)
at org.apache.catalina.core.StandardHost.startInternal(StandardHost.java:841)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1384)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1374)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at org.apache.tomcat.util.threads.InlineExecutorService.execute(InlineExecutorService.java:75)
at java.base/java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:140)
at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:909)
at org.apache.catalina.core.StandardEngine.startInternal(StandardEngine.java:262)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
at org.apache.catalina.core.StandardService.startInternal(StandardService.java:421)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
at org.apache.catalina.core.StandardServer.startInternal(StandardServer.java:930)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
at org.apache.catalina.startup.Catalina.start(Catalina.java:633)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:343)
at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:474)
Caused by: org.apache.catalina.LifecycleException: Failed to start component [StandardEngine[Catalina].StandardHost[localhost].StandardContext[/rhn]]
at org.apache.catalina.util.LifecycleBase.handleSubClassException(LifecycleBase.java:440)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:198)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:717)
... 37 more
Caused by: java.lang.NullPointerException
at org.apache.tomcat.util.scan.StandardJarScanner.process(StandardJarScanner.java:382)
at org.apache.tomcat.util.scan.StandardJarScanner.scan(StandardJarScanner.java:195)
at org.apache.catalina.startup.ContextConfig.processJarsForWebFragments(ContextConfig.java:1971)
at org.apache.catalina.startup.ContextConfig.webConfig(ContextConfig.java:1129)
at org.apache.catalina.startup.ContextConfig.configureStart(ContextConfig.java:775)
at org.apache.catalina.startup.ContextConfig.lifecycleEvent(ContextConfig.java:301)
at org.apache.catalina.util.LifecycleBase.fireLifecycleEvent(LifecycleBase.java:123)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5044)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
... 38 more
24-Jun-2021 11:24:33.958 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/srv/tomcat/webapps/rhn] has finished in [2,601] ms
24-Jun-2021 11:24:33.990 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["ajp-nio-127.0.0.1-8009"]
24-Jun-2021 11:24:34.297 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["ajp-nio-0:0:0:0:0:0:0:1-8009"]
24-Jun-2021 11:24:34.509 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-127.0.0.1-8080"]
24-Jun-2021 11:24:34.564 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in [3,354] milliseconds
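My current guess, for what it's worth: a NullPointerException inside StandardJarScanner during jar scanning can be caused by a jar under WEB-INF/lib that ended up as a dangling symlink or unreadable file after all the package churn above. A quick check (the path is taken from the log; the hypothesis itself is unconfirmed):

```shell
check_rhn_webapp_jars() {
  local libdir=/srv/tomcat/webapps/rhn/WEB-INF/lib
  # Dangling symlinks (jars whose target was removed by the upgrade)
  find "$libdir" -xtype l
  # Zero-length or unreadable jar files
  find "$libdir" -name '*.jar' \( -empty -o ! -readable \)
}
# Run on the Uyuni server:
# check_rhn_webapp_jars
```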
Simon Avery
Linux Systems Administrator
We are happy to announce the immediate availability of Uyuni 2021.06
At https://www.uyuni-project.org/pages/stable-version.html you will find all
the resources you need to start working with Uyuni 2021.06, including the
release notes, documentation, requirements and setup instructions.
VERY IMPORTANT: Read the release notes. Uyuni 2021.06 will change the base OS
for the Server and Proxy to openSUSE Leap 15.3, and some special manual steps
are required. The upgrade is only possible if you are on 2020.07 or newer.
This is the list of highlights for this release:
* Salt 3002
* Base System Upgrade
* PostgreSQL 13
* Missing openSUSE Leap 15.3 channels added to spacewalk-common-channels
* Integration of Ansible into an Uyuni automation environment to protect
customer investment and ease migration (Technology Preview)
Please check the release notes for full details.
Remember that Uyuni follows a rolling release model, so the next version will
contain bugfixes for this one as well as any new features. There will be no
maintenance of 2021.06.
As always, we hope you will enjoy Uyuni 2021.06, and we invite every one of you
to send us your feedback [1] and, of course, your patches if you can
contribute.
Happy hacking!
[1] https://www.uyuni-project.org/pages/contact.html
--
Julio González Gil
Release Engineer, SUSE Manager and Uyuni
jgonzalez(a)suse.com
I have been using the firewalld state module for SLES 15 successfully, but I'm now trying to write a state for SLES 12, which uses the older SuSEfirewall2. Is the salt.states.iptables module the correct approach for this?
I'm also wondering whether directly inserting iptables rules that way will cause problems if someone later opens the YaST firewall module.
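For context, what I have in mind is roughly this (a sketch based on the salt.states.iptables docs; the state ID and rule values are placeholders):

```yaml
# Sketch of an iptables state for SLES 12 (rule values are examples only).
allow_https:
  iptables.append:
    - table: filter
    - chain: INPUT
    - jump: ACCEPT
    - proto: tcp
    - dport: 443
    - save: True
```

One caveat I'd watch for: as far as I know, SuSEfirewall2 flushes and rebuilds the ruleset whenever it is restarted (e.g. after changes in the YaST module), so rules inserted directly with iptables could be lost at that point.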
Allen B.
--
Allen Beddingfield
Systems Engineer
Office of Information Technology
The University of Alabama
Office 205-348-2251
allen(a)ua.edu
Hello
I've hit an issue since migrating a few machines from CentOS 8.4 to Rocky Linux 8.4, still using the CentOS 8 Uyuni client tools.
Remote commands work fine, but I'm unable to change software channels on the migrated machines, which were fine before.
/var/log/salt/minion reports this:
salt.exceptions.SaltRenderError: Jinja variable 'salt.utils.templates.AliasedLoader object' has no attribute 'pkg.version'
Re-bootstrapping the machines doesn't help.
Is this due to the old version of salt-minion that was mentioned in recent community hours, and is the upcoming release likely to fix it?
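In case it's relevant: in our own states, I could at least switch from attribute-style access ({{ salt.pkg.version('bash') }}), which I understand older minions cannot render, to item-style access. A sketch (the state ID is made up):

```yaml
show_bash_version:
  cmd.run:
    - name: "echo bash is at {{ salt['pkg.version']('bash') }}"
```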
Thanks
Uyuni Current: <https://ata-oxy-uyuni01.atass.com/docs/en/release-notes/release-notes-serve…> 2021.05
Simon Avery
Linux Systems Administrator
Hi,
I've just used:
`spacewalk-common-channels -u admin -p pass -k unlimited -a x86_64 'opensuse_leap15_3*'`
to create the Leap 15.3 channels in Uyuni (2021.05). However, comparing
against a Leap 15.3 client, it did not import these repos:
repo-backports-update -
http://download.opensuse.org/update/leap/15.3/backports/
repo-sle-update - http://download.opensuse.org/update/leap/15.3/sle/
Is that intentional or should I create these channels/repos manually?
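For completeness, this is what I would try next, assuming those channels exist in spacewalk-common-channels.ini at all. Both the '-l' listing option and the channel labels below are guesses on my part, so please verify them locally first:

```shell
add_leap153_extra_channels() {
  # List what spacewalk-common-channels.ini actually defines
  # ('-l' is assumed to list available channels; verify locally).
  spacewalk-common-channels -l | grep -i 15_3
  # Channel labels below are guesses, not confirmed names.
  spacewalk-common-channels -u admin -p pass -a x86_64 \
      'opensuse_backports_15_3*' 'opensuse_sle_15_3*'
}
# add_leap153_extra_channels
```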
Thanks
David
I have created a state channel, with an init.sls entered that deploys a package, and pulls in a couple of files that are attached to the state channel.
If I add that state channel to a system, and apply the highstate to that system, it works.
However, what I need is a way to apply the highstate to a group of systems.
What I want to do is to attach this to about 100 systems, and have it deploy - without having to individually go to each system and click "apply high state".
Surely there is a way to do this? With config channels, I would deploy files by adding all the systems, selecting the files to deploy, and deploying them to all systems. I don't see any option from the state channel to apply it to all systems.
Am I missing something obvious?
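The only non-WebUI route I can think of is targeting the minions directly from the Uyuni server with the salt CLI. This is a sketch only: the grain name and value are made up, and I don't know how well bypassing Uyuni's own action scheduling works in practice:

```shell
apply_highstate_to_group() {
  # Standard Salt targeting from the master; 'group:webservers' is a
  # hypothetical grain you would have to set on the minions yourself.
  salt -G 'group:webservers' state.highstate
}
# Run on the Uyuni server:
# apply_highstate_to_group
```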
Thanks.
Allen B.
--
Allen Beddingfield
Systems Engineer
Office of Information Technology
The University of Alabama
Office 205-348-2251
allen(a)ua.edu
Hello,
Just FYI: while auto-installing AlmaLinux 8 hosts in Uyuni 2021.05, I get
the following warnings:
Jun 15 06:52:45 localhost dracut-cmdline[346]: Warning:
'ks=http://X.X.X.X/cblr/svc/op/autoinstall/profile/almalinux8-x86_64:1:Organization'
Jun 15 06:52:45 localhost dracut-cmdline[346]: Warning: ks has been
deprecated. All usage of Anaconda boot arguments without 'inst.' prefix
have been deprecated and will be removed in a future major release.
Please use inst.ks instead.
Jun 15 06:52:45 localhost dracut-cmdline[346]: Warning: 'kssendmac='
Jun 15 06:52:45 localhost dracut-cmdline[346]: Warning: kssendmac has
been deprecated. All usage of Anaconda boot arguments without 'inst.'
prefix have been deprecated and will be removed in a future major
release. Please use inst.ks.sendmac instead.
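If I read the warnings right, the fix would just be renaming the boot arguments in the kernel options Uyuni generates for the profile; where exactly to change them in Uyuni is an assumption on my part:

```text
# Kernel options as generated today (deprecated spellings):
ks=http://X.X.X.X/cblr/svc/op/autoinstall/profile/almalinux8-x86_64:1:Organization kssendmac
# What Anaconda now expects (same options with the 'inst.' prefix):
inst.ks=http://X.X.X.X/cblr/svc/op/autoinstall/profile/almalinux8-x86_64:1:Organization inst.ks.sendmac
```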
best regards,
Jordi
Hello,
The recent CentOS 8.4 release appears to have triggered more issues for us and made me review how we manage our CentOS 8 clients. We plan to move to Rocky Linux soon, but the same issue will persist there.
Currently, Uyuni syncs the CentOS repos. My workaround so far has been to run "dnf -y update" on a schedule on each client, instead of patching from within Uyuni as we do with CentOS 7. This pulls packages from the Uyuni mirror and has largely worked, but no longer: since I updated the repos to 8.4 there are lots of module-related issues on all CentOS 8 clients. I'm not clear why this has re-triggered the problem, but it has. My understanding is that Uyuni doesn't update the module metadata when it populates its repositories, so the clients can't see it and fail.
I've read about the Uyuni method of using the Content Lifecycle and have trialled it. It does work, but we don't particularly want to do this manually for each CentOS update or patch cycle. We're not big enough to warrant a corporate approval cycle for CentOS, so updates are applied directly to our servers. (This has resulted in relatively few issues.)
The only other method I can think of is to switch the clients to local .repo files and pull updates directly from the CentOS mirror. That obviously negates some of the benefits Uyuni brings to package management, but it would work reliably.
So I'm wondering: how are other CentOS/Alma/Oracle/Rocky 8 users of Uyuni applying updates, and how have you overcome the module problems?
Simon Avery
Linux Systems Administrator
________________________________________
From: Pau Garcia <pau.garcia(a)suse.com>
Sent: Wednesday, June 9, 2021 1:46:18 PM (UTC+01:00) Brussels, Copenhagen, Madrid, Paris
To: users(a)lists.uyuni-project.org; devel(a)lists.uyuni-project.org; announce(a)lists.uyuni-project.org
Cc: Victor Zhestkov; Stefan Bluhm; Karl Tarvas; Dan Čermák; Bidault, Philippe; Richard Schaertel
Subject: Uyuni Community Hours
When: Friday, June 25, 2021 4:00 PM-5:00 PM.
Where:
Join us one more month to learn about and discuss Uyuni!
The latest developments around Uyuni are presented, and the community can ask questions, provide feedback, and request or suggest features or enhancements to the Product Owner and members of the development team.
Agenda will be published in the Uyuni wiki:
https://github.com/uyuni-project/uyuni/wiki/Uyuni-Community-Hours
NB: Uyuni Community Hours are held every last Friday of the month at 4pm CET. Meetings are recorded and published on the Uyuni YouTube channel:
https://www.youtube.com/channel/UCB0SkZFAw9vPCFeUIYqZQ5A
________________________________________________________________________________
Microsoft Teams meeting
Or call in (audio only): +34 917 94 59 77,,418209063# (Spain, Madrid)
Phone Conference ID: 418 209 063#
________________________________________________________________________________