Unable to restart Uyuni following Uyuni upgrade 2021-06
Hi Folks,

Thanks for the new release - but I'm having significant issues (around four hours so far) updating to it, and I currently have a broken system. Please advise!

'spacewalk-server status' reports all is fine, however the web UI times out. (At other stages, it was returning a 404 for all URLs.)

Steps I have done:

* Changed repos and upgraded the base OS to Leap 15.3
* Upgraded the database as per https://www.uyuni-project.org/uyuni-docs/en/uyuni/upgrade/db-migration-13.ht...
* Changed the Uyuni repo.
* Normal Uyuni upgrade.

All package issues seemingly resolved, apart from this:

Problem: the to be installed patch:SUSE-2020-3767-1.noarch conflicts with 'apache-commons-el < 1.0-3.3.1' provided by the installed apache-commons-el-1.0-bp153.2.24.noarch
 Solution 1: Following actions will be done:
  deinstallation of apache-commons-el-1.0-bp153.2.24.noarch
  deinstallation of spacewalk-java-4.2.23-1.7.uyuni1.noarch
  deinstallation of spacewalk-common-4.2.3-1.6.uyuni1.noarch
  deinstallation of spacewalk-postgresql-4.2.3-1.6.uyuni1.noarch
  deinstallation of patterns-uyuni_server-2021.06-2.3.uyuni1.x86_64
  deinstallation of susemanager-4.2.19-1.2.uyuni1.x86_64
  deinstallation of supportutils-plugin-susemanager-4.2.2-2.4.uyuni1.noarch
  deinstallation of uyuni-cluster-provider-caasp-4.2.3-1.4.uyuni1.noarch
 Solution 2: do not install patch:SUSE-2020-3767-1.noarch

(Did try #1, then reinstalling the uyuni-server pattern, but it just reappears. ISTR there was a breaking issue with apache-commons-el in a previous update.)

The main issue appears to be related to this, in catalina's logs (the directory is present and has an index.jsp file and subdirectories):
24-Jun-2021 11:24:31.328 INFO [main] org.apache.catalina.core.StandardEngine.startInternal Starting Servlet engine: [Apache Tomcat/9.0.36]
24-Jun-2021 11:24:31.357 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory [/srv/tomcat/webapps/rhn]
24-Jun-2021 11:24:33.950 SEVERE [main] org.apache.catalina.startup.HostConfig.deployDirectory Error deploying web application directory [/srv/tomcat/webapps/rhn]
 java.lang.IllegalStateException: Error starting child
  at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:720)
  at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:690)
  at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:705)
  at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1132)
  at org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1865)
  at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
  at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
  at org.apache.tomcat.util.threads.InlineExecutorService.execute(InlineExecutorService.java:75)
  at java.base/java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:118)
  at org.apache.catalina.startup.HostConfig.deployDirectories(HostConfig.java:1044)
  at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:429)
  at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1575)
  at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:309)
  at org.apache.catalina.util.LifecycleBase.fireLifecycleEvent(LifecycleBase.java:123)
  at org.apache.catalina.util.LifecycleBase.setStateInternal(LifecycleBase.java:423)
  at org.apache.catalina.util.LifecycleBase.setState(LifecycleBase.java:366)
  at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:936)
  at org.apache.catalina.core.StandardHost.startInternal(StandardHost.java:841)
  at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
  at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1384)
  at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1374)
  at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
  at org.apache.tomcat.util.threads.InlineExecutorService.execute(InlineExecutorService.java:75)
  at java.base/java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:140)
  at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:909)
  at org.apache.catalina.core.StandardEngine.startInternal(StandardEngine.java:262)
  at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
  at org.apache.catalina.core.StandardService.startInternal(StandardService.java:421)
  at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
  at org.apache.catalina.core.StandardServer.startInternal(StandardServer.java:930)
  at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
  at org.apache.catalina.startup.Catalina.start(Catalina.java:633)
  at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.base/java.lang.reflect.Method.invoke(Method.java:566)
  at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:343)
  at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:474)
 Caused by: org.apache.catalina.LifecycleException: Failed to start component [StandardEngine[Catalina].StandardHost[localhost].StandardContext[/rhn]]
  at org.apache.catalina.util.LifecycleBase.handleSubClassException(LifecycleBase.java:440)
  at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:198)
  at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:717)
  ... 37 more
 Caused by: java.lang.NullPointerException
  at org.apache.tomcat.util.scan.StandardJarScanner.process(StandardJarScanner.java:382)
  at org.apache.tomcat.util.scan.StandardJarScanner.scan(StandardJarScanner.java:195)
  at org.apache.catalina.startup.ContextConfig.processJarsForWebFragments(ContextConfig.java:1971)
  at org.apache.catalina.startup.ContextConfig.webConfig(ContextConfig.java:1129)
  at org.apache.catalina.startup.ContextConfig.configureStart(ContextConfig.java:775)
  at org.apache.catalina.startup.ContextConfig.lifecycleEvent(ContextConfig.java:301)
  at org.apache.catalina.util.LifecycleBase.fireLifecycleEvent(LifecycleBase.java:123)
  at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5044)
  at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
  ... 38 more
24-Jun-2021 11:24:33.958 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/srv/tomcat/webapps/rhn] has finished in [2,601] ms
24-Jun-2021 11:24:33.990 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["ajp-nio-127.0.0.1-8009"]
24-Jun-2021 11:24:34.297 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["ajp-nio-0:0:0:0:0:0:0:1-8009"]
24-Jun-2021 11:24:34.509 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-127.0.0.1-8080"]
24-Jun-2021 11:24:34.564 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in [3,354] milliseconds

Simon Avery
Linux Systems Administrator
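[Editor's note, not part of the original mail: the NullPointerException originates in StandardJarScanner.process, i.e. while Tomcat scans the webapp's JARs for web fragments, so a broken entry under /srv/tomcat/webapps/rhn - for example a dangling symlink left behind by the half-finished package upgrade - is a plausible trigger. A minimal, hypothetical diagnostic; the helper name and demo directory are invented for illustration:]

```shell
# Hypothetical diagnostic (not from this thread): list dangling symlinks
# under a webapp directory. On the real server the argument would be
# /srv/tomcat/webapps/rhn.
find_dangling_links() {
    # -xtype l (GNU find) matches symlinks whose target no longer exists
    find "$1" -xtype l
}

# Self-contained demo on a throwaway directory:
demo=$(mktemp -d)
mkdir -p "$demo/WEB-INF/lib"
ln -s /nonexistent/commons-el.jar "$demo/WEB-INF/lib/commons-el.jar"
find_dangling_links "$demo"    # prints the path of the dangling symlink
rm -rf "$demo"
```

Any path this prints on a real server would be a JAR (or link) that the package upgrade left pointing at nothing, which is worth fixing or removing before restarting Tomcat.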
On Thursday, 24 June 2021 14:20:41 (CEST) Simon Avery wrote:
Hi Folk,
Thanks for the new release - but I'm having significant issues (around four hours so far) in updating this and currently have a broken system.
Please advise!
'spacewalk-server status' reports all is fine, however the webui times out (At other stages, it was returning a 404 for all urls)
Steps I have done:
* Changed repos and upgraded base OS to Leap 15.3
* Upgraded database as per https://www.uyuni-project.org/uyuni-docs/en/uyuni/upgrade/db-migration-13.html
* Changed the Uyuni repo.
* Normal Uyuni upgrade.
Wrong steps. https://www.uyuni-project.org/doc/2021.06/release-notes-uyuni-server.html
Upgrade notes

WARNING: Check "Update from previous versions of Uyuni Server" section below for details, as this release updates the base OS from openSUSE Leap 15.2 to openSUSE Leap 15.3, and there are special steps required. You need at least Uyuni 2020.07 already installed to perform the upgrade.

Update from previous versions of Uyuni Server

WARNING: Make sure you check the documentation this time. Because of the change from openSUSE Leap 15.2 to openSUSE Leap 15.3, some special steps are required!

WARNING: This applies not only when updating from 2021.05, but also when updating from any version after 2020.07 (included). Updating from 2020.06 and older is not supported anymore.

See the "Upgrade Guide" for detailed instructions on how to upgrade. You will need to follow the "Upgrade the Server" > "Server - Major Upgrade" section.

Then if you go to that section...
All connected clients will continue to run and are manageable unchanged
And the doc: https://www.uyuni-project.org/uyuni-docs/en/uyuni/upgrade/server-major-upgra...

*Maybe* you can fix the issues by doing the procedure again, EXCEPT for the call to /usr/lib/susemanager/bin/pg-migrate-12-to-13.sh (step 3), as you already did that. That should work, but if you have a backup, it's better to restore it and start the upgrade again.

TBH, I wonder if we should not just remove the "Upgrade the Database" section and integrate it with the "Server - Major Upgrade" section. Joseph, any opinion?
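[Editor's note, not part of the original mail: before retrying the procedure, one sanity check worth doing - my own suggestion, not a step from the linked guide - is confirming that no zypper repository definition still points at Leap 15.2, since a leftover 15.2 repo could keep pulling in mismatched packages like the apache-commons-el patch above. The helper name and demo files are invented for illustration:]

```shell
# Hypothetical pre-flight check (not from the upgrade guide): flag any
# repo definition still referencing Leap 15.2. On the real server the
# argument would be /etc/zypp/repos.d.
check_repos() {
    # -l prints only the names of files containing a match
    grep -l '15\.2' "$1"/*.repo 2>/dev/null
}

# Self-contained demo with throwaway repo files:
demo=$(mktemp -d)
printf 'baseurl=http://download.opensuse.org/distribution/leap/15.2/repo/oss/\n' > "$demo/old.repo"
printf 'baseurl=http://download.opensuse.org/distribution/leap/15.3/repo/oss/\n' > "$demo/new.repo"
check_repos "$demo"    # lists only old.repo
rm -rf "$demo"
```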
--
Julio González Gil
Release Engineer, SUSE Manager and Uyuni
jgonzalez@suse.com
Thanks, Julio.

It looks like my mistake was following the minor-upgrade steps, then going off piste with manual steps.

I've now rolled back to the starting place, and will have another stab at it tomorrow using the link you provided.

S

-----Original Message-----
From: Julio Gonzalez <jgonzalez@suse.com>
Sent: 24 June 2021 13:31
To: uyuni-users@opensuse.org; users@lists.uyuni-project.org
Cc: Simon Avery <Simon.Avery@atass-sports.co.uk>; Joseph Cayouette <JCayouette@suse.com>
Subject: [EXTERNAL EMAIL] Re: Unable to restart Uyuni following Uyuni upgrade 2021-06

On Thursday, 24 June 2021 14:20:41 (CEST) Simon Avery wrote:
On retrying with the Major upgrade path, things were much more positive (and simpler), and the update completed.

One warning about the master GPG key, but otherwise things look good - and I can see that modules.yaml is now populating in the repo. Hopefully that will fix my CentOS/Rocky 8.4 issues.

Thanks

-----Original Message-----
From: Simon Avery
Sent: 24 June 2021 14:26
To: 'Julio Gonzalez' <jgonzalez@suse.com>; uyuni-users@opensuse.org; users@lists.uyuni-project.org
Cc: Joseph Cayouette <JCayouette@suse.com>
Subject: RE: Unable to restart Uyuni following Uyuni upgrade 2021-06

Thanks, Julio. It looks like my mistake was following the minor-upgrade steps, then going off piste with manual steps. I've now rolled back to the starting place, and will have another stab at it tomorrow using the link you provided.

S

-----Original Message-----
From: Julio Gonzalez <jgonzalez@suse.com>
Sent: 24 June 2021 13:31
To: uyuni-users@opensuse.org; users@lists.uyuni-project.org
Cc: Simon Avery <Simon.Avery@atass-sports.co.uk>; Joseph Cayouette <JCayouette@suse.com>
Subject: [EXTERNAL EMAIL] Re: Unable to restart Uyuni following Uyuni upgrade 2021-06

On Thursday, 24 June 2021 14:20:41 (CEST) Simon Avery wrote:
On Friday, 25 June 2021 10:30:06 (CEST) Simon Avery wrote:
On retrying with the Major upgrade path, things were much more positive (and simpler) and the update completed.
One warning about the master GPG key, but otherwise things look good - and I can see that modules.yaml is now populating in the repo.
What do you mean? What master GPG key :-?
Hopefully that will fix my CentOS/Rocky 8.4 issues.
Thanks
-----Original Message-----
From: Simon Avery
Sent: 24 June 2021 14:26
To: 'Julio Gonzalez' <jgonzalez@suse.com>; uyuni-users@opensuse.org; users@lists.uyuni-project.org
Cc: Joseph Cayouette <JCayouette@suse.com>
Subject: RE: Unable to restart Uyuni following Uyuni upgrade 2021-06
Thanks, Julio.
It looks like my mistake was following the minor-upgrade steps, then going off piste with manual steps.
I've now rolled back to the starting place, and will have another stab at it tomorrow using the link you provided.
S
-----Original Message-----
From: Julio Gonzalez <jgonzalez@suse.com>
Sent: 24 June 2021 13:31
To: uyuni-users@opensuse.org; users@lists.uyuni-project.org
Cc: Simon Avery <Simon.Avery@atass-sports.co.uk>; Joseph Cayouette <JCayouette@suse.com>
Subject: [EXTERNAL EMAIL] Re: Unable to restart Uyuni following Uyuni upgrade 2021-06

On Thursday, 24 June 2021 14:20:41 (CEST) Simon Avery wrote:
Hi Folk,
Thanks for the new release - but I'm having significant issues (around four hours so far) in updating this and currently have a broken system.
Please advise!
'spacewalk-server status' reports all is fine; however, the web UI times out (at other stages, it was returning a 404 for all URLs).
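When the service status looks healthy but the web UI times out, one way to narrow things down is to ask Tomcat directly on its loopback HTTP connector, bypassing the Apache proxy layer. A hedged sketch: port 8080 comes from the catalina log quoted in this thread, and the `/rhn/` path is an assumption.

```shell
# Query Tomcat's loopback connector directly, bypassing the Apache proxy.
# Port 8080 appears in the catalina.out excerpt in this thread; /rhn/ is
# assumed. "Connection refused" or 404 here points at Tomcat/webapp
# deployment; a normal response points at the proxy layer instead.
curl -sS -o /dev/null -w '%{http_code}\n' http://127.0.0.1:8080/rhn/
```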
Steps I have done:
* Changed repos and upgraded base OS to Leap 15.3
* Upgraded database as per https://www.uyuni-project.org/uyuni-docs/en/uyuni/upgrade/db-migration-13.html
* Changed the Uyuni repo.
* Normal Uyuni upgrade.
Wrong steps.
https://www.uyuni-project.org/doc/2021.06/release-notes-uyuni-server.html
Upgrade notes

WARNING: Check "Update from previous versions of Uyuni Server" section below for details, as this release updates the base OS from openSUSE Leap 15.2 to openSUSE Leap 15.3, and there are special steps required. You need at least Uyuni 2020.07 already installed to perform the upgrade.
Then if you go to that section...
Update from previous versions of Uyuni Server

WARNING: Make sure you check the documentation this time. Because of the change from openSUSE Leap 15.2 to openSUSE Leap 15.3, some special steps are required!

WARNING: This applies not only when updating from 2021.05, but also when updating from any version after 2020.07 (included). Updating from 2020.06 and older is not supported anymore.
See the "Upgrade Guide" for detailed instructions on how to upgrade. You will need to follow the "Upgrade the Server" > "Server - Major Upgrade" section.
All connected clients will continue to run and are manageable unchanged
And the doc: https://www.uyuni-project.org/uyuni-docs/en/uyuni/upgrade/server-major-upgrade-uyuni.html
*Maybe* you can fix the issues by doing the procedure again, EXCEPT the call to /usr/lib/susemanager/bin/pg-migrate-12-to-13.sh (step 3) as you already did that.
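For reference, redoing the procedure minus the already-completed database migration can be sketched roughly as below. This is a condensed, hedged outline, not the authoritative procedure; the exact commands and repo URLs are in the linked "Server - Major Upgrade" page, and the assumption here is that the Leap 15.3 and Uyuni 2021.06 repositories are already configured, as in this thread.

```shell
# Hedged sketch of re-running the major-upgrade flow while skipping the
# PostgreSQL migration that was already done.
spacewalk-service stop
zypper ref                         # refresh the switched repositories
zypper dup --allow-vendor-change   # redo the distribution upgrade
# SKIP step 3, already completed:
#   /usr/lib/susemanager/bin/pg-migrate-12-to-13.sh
spacewalk-schema-upgrade           # bring the DB schema up to date
spacewalk-service start
```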
That should work, but if you have a backup, it's better if you restore it and start the upgrade again.
TBH, I wonder if we should not just remove the "Upgrade the Database" section and integrate it into the "Server - Major Upgrade" section.
Joseph, any opinion?
All package issues seemingly resolved, apart from this:
Problem: the to be installed patch:SUSE-2020-3767-1.noarch conflicts with 'apache-commons-el < 1.0-3.3.1' provided by the installed apache-commons-el-1.0-bp153.2.24.noarch
 Solution 1: Following actions will be done:
  deinstallation of apache-commons-el-1.0-bp153.2.24.noarch
  deinstallation of spacewalk-java-4.2.23-1.7.uyuni1.noarch
  deinstallation of spacewalk-common-4.2.3-1.6.uyuni1.noarch
  deinstallation of spacewalk-postgresql-4.2.3-1.6.uyuni1.noarch
  deinstallation of patterns-uyuni_server-2021.06-2.3.uyuni1.x86_64
  deinstallation of susemanager-4.2.19-1.2.uyuni1.x86_64
  deinstallation of supportutils-plugin-susemanager-4.2.2-2.4.uyuni1.noarch
  deinstallation of uyuni-cluster-provider-caasp-4.2.3-1.4.uyuni1.noarch
 Solution 2: do not install patch:SUSE-2020-3767-1.noarch
(Did try #1, then reinstalling uyuni-server patterns, but it just reappears. ISTR there was a breaking issue with apache-commons-el in a previous update)
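If the conflicting patch keeps being re-offered after reinstalling the pattern, a zypper lock can stop the resolver from proposing it again. A hedged sketch, with the patch name taken from the solver output above; verify the lock behaves as expected with `zypper locks`.

```shell
# Lock the conflicting patch so the resolver stops proposing it.
zypper addlock --type patch SUSE-2020-3767-1
zypper locks    # confirm the lock is in place
# To undo later:
#   zypper removelock --type patch SUSE-2020-3767-1
```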
The main issue appears to be related to this, in catalina's logs: (The dir is present and has an index.jsp file and dirs.)
[snip: stack trace identical to the one quoted earlier in the thread]
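For what it's worth, a NullPointerException inside StandardJarScanner during JAR scanning often means the scanner hit an unreadable or dangling entry while walking the webapp's libraries. A quick, hedged check; the path is taken from the deploy directory in the log, and the symlink theory is only a guess given the package removals above.

```shell
# List dangling symlinks under the webapp directory; a JAR symlink whose
# target package was removed would make the JAR scanner trip.
find /srv/tomcat/webapps/rhn -xtype l -print
```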
Simon Avery Linux Systems Administrator
-- Julio González Gil Release Engineer, SUSE Manager and Uyuni jgonzalez@suse.com
At the end of the migration script, it showed this, prompting me to review scrollback:

Migration went wrong. Please fix the issues and try again.

The only error I could see was:

(1396/1527) Installing: uyuni-build-keys-2021.06-3.2.uyuni1.noarch [done]
Additional rpm output:
importing Uyuni build key to rpm keyring...
gpg: public key of ultimately trusted key 8EFD162952047CD0 not found
importing the key from the file /usr/lib/uyuni/uyuni-build-keys.gpg returned an error.
This should not happen. It may not be possible to properly verify the authenticity of rpm packages from SUSE sources. The keyring containing the SUSE rpm package signing key can be found in the root directory of the first CD (DVD) of your SUSE product.
warning: %post(uyuni-build-keys-2021.06-3.2.uyuni1.noarch) scriptlet failed, exit status 255

-----Original Message-----
From: Julio Gonzalez <jgonzalez@suse.com>
Sent: 25 June 2021 09:36
To: uyuni-users@opensuse.org; users@lists.uyuni-project.org
Cc: Simon Avery <Simon.Avery@atass-sports.co.uk>
Subject: [EXTERNAL EMAIL] Re: Unable to restart Uyuni following Uyuni upgrade 2021-06

On Friday, 25 June 2021 10:30:06 (CEST) Simon Avery wrote:
[snip: the rest of the quoted message repeats the 24-25 June exchange above verbatim]
On Friday, 25 June 2021 11:02:20 (CEST) Simon Avery wrote:
At the end of the migration script, it showed this, prompting me to review scrollback.
Migration went wrong. Please fix the issues and try again.
So my guess is that you went ahead and executed the PostgreSQL migration script anyway, as the log had a warning but not an error? I didn't see this error on the migration test I did. I will repeat it now, just in case.
The only error I could see was:
(1396/1527) Installing: uyuni-build-keys-2021.06-3.2.uyuni1.noarch [done]
Additional rpm output:
importing Uyuni build key to rpm keyring...
gpg: public key of ultimately trusted key 8EFD162952047CD0 not found
importing the key from the file /usr/lib/uyuni/uyuni-build-keys.gpg returned an error.
This should not happen. It may not be possible to properly verify the authenticity of rpm packages from SUSE sources. The keyring containing the SUSE rpm package signing key can be found in the root directory of the first CD (DVD) of your SUSE product.
warning: %post(uyuni-build-keys-2021.06-3.2.uyuni1.noarch) scriptlet failed, exit status 255
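A failed %post key import like this can usually be retried by hand once the system is otherwise consistent. A hedged sketch; the key file path is taken from the error output above, and whether this clears the warning on the next run is not guaranteed.

```shell
# Re-import the Uyuni build key that the %post scriptlet failed to import.
rpm --import /usr/lib/uyuni/uyuni-build-keys.gpg
# List the keys rpm now knows about, to confirm the import took effect.
rpm -q gpg-pubkey --qf '%{NAME}-%{VERSION}-%{RELEASE} %{SUMMARY}\n'
```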
[snip: the rest of the quoted message repeats the earlier thread, including the full stack trace, verbatim]
Simon Avery Linux Systems Administrator
-- Julio González Gil Release Engineer, SUSE Manager and Uyuni jgonzalez@suse.com
I rolled right back from yesterday to a 2021-05 state, so none of yesterday's attempts affected it - so please disregard any of that.

This error was at the end of `/usr/lib/susemanager/bin/server-migrator.sh`.

Once rebooted, I then ran `/usr/lib/susemanager/bin/pg-migrate-12-to-13.sh`, which completed normally, and Uyuni started up.

S

-----Original Message----- From: Julio Gonzalez <jgonzalez@suse.com> Sent: 25 June 2021 10:26 To: uyuni-users@opensuse.org; users@lists.uyuni-project.org; Simon Avery <Simon.Avery@atass-sports.co.uk> Subject: [EXTERNAL EMAIL] Re: Unable to restart Uyuni following Uyuni upgrade 2021-06 On viernes, 25 de junio de 2021 11:02:20 (CEST) Simon Avery wrote:
At the end of the migration script, it showed this, prompting me to review scrollback.
Migration went wrong. Please fix the issues and try again.
So my guess is that you went ahead and executed the PostgreSQL migration script anyway, as the log had a warning but not an error? I didn't see this error on the migration test I did. I will repeat it now, just in case.
The only error I could see was:
(1396/1527) Installing: uyuni-build-keys-2021.06-3.2.uyuni1.noarch [done]
Additional rpm output:
importing Uyuni build key to rpm keyring...
gpg: public key of ultimately trusted key 8EFD162952047CD0 not found
importing the key from the file /usr/lib/uyuni/uyuni-build-keys.gpg returned an error. This should not happen. It may not be possible to properly verify the authenticity of rpm packages from SUSE sources. The keyring containing the SUSE rpm package signing key can be found in the root directory of the first CD (DVD) of your SUSE product.
warning: %post(uyuni-build-keys-2021.06-3.2.uyuni1.noarch) scriptlet failed,
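(Editor's note: if the build key really did fail to land in the rpm keyring, the import the %post scriptlet attempted can be retried by hand. A hedged sketch only - the keyring path is the one shown in the rpm output above, and whether a manual import is safe on this system is an assumption:)

```shell
# List which build keys rpm currently trusts; imported keys show up as
# gpg-pubkey pseudo-packages:
rpm -q gpg-pubkey --qf '%{NAME}-%{VERSION}-%{RELEASE}  %{SUMMARY}\n' | grep -i uyuni

# Re-run the import the scriptlet attempted (path from the output above):
rpm --import /usr/lib/uyuni/uyuni-build-keys.gpg
```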
-----Original Message----- From: Julio Gonzalez <jgonzalez@suse.com> Sent: 25 June 2021 09:36 To: uyuni-users@opensuse.org; users@lists.uyuni-project.org Cc: Simon Avery <Simon.Avery@atass-sports.co.uk> Subject: [EXTERNAL EMAIL] Re: Unable to restart Uyuni following Uyuni upgrade 2021-06 On viernes, 25 de junio de 2021 10:30:06 (CEST) Simon Avery wrote:
On retrying with the Major upgrade path, things were much more positive (and simpler) and the update completed.
One warning about the master GPG key, but otherwise things look good - and I can see that modules.yaml is now populating in the repo.
What do you mean? What master GPG key :-?
Hopefully that will fix my Centos/Rocky 8.4 issues.
Thanks
-----Original Message----- From: Simon Avery Sent: 24 June 2021 14:26 To: 'Julio Gonzalez' <jgonzalez@suse.com>; uyuni-users@opensuse.org; users@lists.uyuni-project.org Cc: Joseph Cayouette <JCayouette@suse.com> Subject: RE: Unable to restart Uyuni following Uyuni upgrade 2021-06
Thanks, Julio.
It looks like my mistake was following the minor-upgrade steps, then going off piste with manual steps.
I've now rolled back to the starting place, and will have another stab at it tomorrow using the link you provided.
S
-----Original Message----- From: Julio Gonzalez <jgonzalez@suse.com> Sent: 24 June 2021 13:31 To: uyuni-users@opensuse.org; users@lists.uyuni-project.org Cc: Simon Avery <Simon.Avery@atass-sports.co.uk>; Joseph Cayouette <JCayouette@suse.com> Subject: [EXTERNAL EMAIL] Re: Unable to restart Uyuni following Uyuni upgrade 2021-06 On jueves, 24 de junio de 2021 14:20:41 (CEST) Simon Avery wrote:
Hi Folk,
Thanks for the new release - but I'm having significant issues (around four hours so far) in updating this and currently have a broken system.
Please advise!
'spacewalk-server status' reports all is fine, however the webui times out (At other stages, it was returning a 404 for all urls)
Steps I have done: * Changed repos and upgraded base OS to Leap 15.3
* Upgraded database as per
https://www.uyuni-project.org/uyuni-docs/en/uyuni/upgrade/db-migration-13.html
* Changed the Uyuni repo. * Normal Uyuni upgrade.
Wrong steps.
https://www.uyuni-project.org/doc/2021.06/release-notes-uyuni-server.html
Upgrade notes WARNING: Check "Update from previous versions of Uyuni Server" section below
for details, as this release updates the base OS from openSUSE Leap 15.2 to openSUSE Leap 15.3, and there are special steps required. You need at least Uyuni 2020.07 already installed to perform the upgrade.
Then if you go to that section...
Update from previous versions of Uyuni Server WARNING: Make sure you check the documentation this time. Because of the
change from openSUSE Leap 15.2 to openSUSE Leap 15.3, some special steps are required! WARNING: This applies not only when updating from 2021.05, but also when updating from any version after 2020.07 (included). Updating from 2020.06 and older is not supported anymore.
See the "Upgrade Guide" for detailed instructions on how to upgrade. You
will need to follow the "Upgrade the Server" > "Server - Major Upgrade" section.
All connected clients will continue to run and are manageable unchanged
And the doc: https://www.uyuni-project.org/uyuni-docs/en/uyuni/upgrade/server-major-upgrade-uyuni.html
*Maybe* you can fix the issues by doing the procedure again, EXCEPT the call to /usr/lib/susemanager/bin/pg-migrate-12-to-13.sh (step 3) as you already did that.
That should work, but if you have a backup, it's better if you restore it and start the upgrade again.
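(Editor's note: concretely, the retry described here might look like the sketch below. It assumes the script paths already named in this thread, that "step 3" is the PostgreSQL migration, and that `spacewalk-service` is the service wrapper on this install - verify against the linked Upgrade Guide before running anything:)

```shell
# Re-run the major-upgrade procedure, skipping only the already-completed
# database migration (step 3 in the guide):
spacewalk-service stop
/usr/lib/susemanager/bin/server-migrator.sh          # OS + product migration
# /usr/lib/susemanager/bin/pg-migrate-12-to-13.sh    # step 3: SKIP, already done
spacewalk-service start
spacewalk-service status
```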
TBH, I wonder if we should not just remove the "Upgrade the Database" section and integrate it with the "Server - Major Upgrade" section.
Joseph any opinion?
[...]
-- Julio González Gil Release Engineer, SUSE Manager and Uyuni jgonzalez@suse.com
On viernes, 25 de junio de 2021 11:28:06 (CEST) Simon Avery wrote:
I rolled right back from yesterday to a 2021-05 state, so none of yesterday's attempts affected it - so please disregard any of that.
So the error happened yesterday, and not with the migration you performed today after the rollback, right? I can confirm that I could not reproduce this problem. In my case uyuni-build-keys installs without that warning.
[...]
-- Julio González Gil Release Engineer, SUSE Manager and Uyuni jgonzalez@suse.com
No - the error happened today during the major update.

Additionally, when I try to register or re-register clients (Centos 7, for example), it fails and salt/minion logs this, which feels related; Can I add this key manually?

2021-06-25 13:43:32,162 [salt.loaded.int.module.cmdmod:842 ][ERROR ][10644] retcode: 1
2021-06-25 13:43:32,162 [salt.state :323 ][ERROR ][10644] {u'pid': 10737, u'retcode': 1, u'stderr': u'curl: (60) Peer\'s Certificate issuer is not recognized.\nMore details here: http://curl.haxx.se/docs/sslcerts.html\n\ncurl performs SSL certificate verification by default, using a "bundle"\n of Certificate Authority (CA) public keys (CA certs). If the default\n bundle file isn\'t adequate, you can specify an alternate file\n using the --cacert option.\nIf this HTTPS server uses a certificate signed by a CA represented in\n the bundle, the certificate verification probably failed due to a\n problem with the certificate (it might be expired, or the name might\n not match the domain name in the URL).\nIf you\'d like to turn off curl\'s verification of the certificate, use\n the -k (or --insecure) option.\nerror: https://ata-oxy-uyuni01.atass.com/pub/res-gpg-pubkey-0182b964.key: import read failed(2).', u'stdout': u''}
2021-06-25 13:43:32,590 [salt.loaded.int.module.cmdmod:836 ][ERROR ][10644] Command 'rpm' failed with return code: 1
2021-06-25 13:43:32,590 [salt.loaded.int.module.cmdmod:840 ][ERROR ][10644] stderr: curl: (60) Peer's Certificate issuer is not recognized. More details here: http://curl.haxx.se/docs/sslcerts.html curl performs SSL certificate verification by default, using a "bundle" of Certificate Authority (CA) public keys (CA certs). If the default bundle file isn't adequate, you can specify an alternate file using the --cacert option. If this HTTPS server uses a certificate signed by a CA represented in the bundle, the certificate verification probably failed due to a problem with the certificate (it might be expired, or the name might not match the domain name in the URL). If you'd like to turn off curl's verification of the certificate, use the -k (or --insecure) option. error: https://ata-oxy-uyuni01.atass.com/pub/sle12-gpg-pubkey-39db7c82.key: import read failed(2).
2021-06-25 13:43:32,591 [salt.loaded.int.module.cmdmod:842 ][ERROR ][10644] retcode: 1
2021-06-25 13:43:32,591 [salt.state :323 ][ERROR ][10644] {u'pid': 10775, u'retcode': 1, u'stderr': u'curl: (60) Peer\'s Certificate issuer is not recognized.\nMore details here: http://curl.haxx.se/docs/sslcerts.html\n\ncurl performs SSL certificate verification by default, using a "bundle"\n of Certificate Authority (CA) public keys (CA certs). If the default\n bundle file isn\'t adequate, you can specify an alternate file\n using the --cacert option.\nIf this HTTPS server uses a certificate signed by a CA represented in\n the bundle, the certificate verification probably failed due to a\n problem with the certificate (it might be expired, or the name might\n not match the domain name in the URL).\nIf you\'d like to turn off curl\'s verification of the certificate, use\n the -k (or --insecure) option.\nerror: https://ata-oxy-uyuni01.atass.com/pub/sle12-gpg-pubkey-39db7c82.key: import read failed(2).', u'stdout': u''}

-----Original Message----- From: Julio Gonzalez <jgonzalez@suse.com> Sent: 25 June 2021 11:52 To: uyuni-users@opensuse.org; users@lists.uyuni-project.org; Simon Avery <Simon.Avery@atass-sports.co.uk> Subject: [EXTERNAL EMAIL] Re: Unable to restart Uyuni following Uyuni upgrade 2021-06 On viernes, 25 de junio de 2021 11:28:06 (CEST) Simon Avery wrote:
I rolled right back from yesterday to a 2021-05 state, so none of yesterday's attempts affected it - so please disregard any of that.
So the error happened yesterday, and not with the migration you performed today after the rollback, right? I can confirm that I could not reproduce this problem. In my case uyuni-build-keys installs without that warning.
This error was at the end of `/usr/lib/susemanager/bin/server-migrator.sh`.
Once rebooted, I then ran `/usr/lib/susemanager/bin/pg-migrate-12-to-13.sh`, which completed normally, and Uyuni started up.
S
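For anyone repeating these steps, a quick sanity check after both scripts may help confirm the migration landed. This is a sketch meant for the Uyuni server itself (it is not runnable elsewhere, and output will differ per installation):

```shell
# Confirm the database migration really landed on PostgreSQL 13
sudo -u postgres psql -c 'SHOW server_version;'

# Confirm all Uyuni services came back up after the reboot
spacewalk-service status
```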
-----Original Message----- From: Julio Gonzalez <jgonzalez@suse.com> Sent: 25 June 2021 10:26 To: uyuni-users@opensuse.org; users@lists.uyuni-project.org; Simon Avery <Simon.Avery@atass-sports.co.uk> Subject: [EXTERNAL EMAIL] Re: Unable to restart Uyuni following Uyuni upgrade 2021-06 On viernes, 25 de junio de 2021 11:02:20 (CEST) Simon Avery wrote:
At the end of the migration script, it showed this, prompting me to review scrollback.
Migration went wrong. Please fix the issues and try again.
So my guess is that anyway you went ahead and executed the PostgreSQL migration script, as the log had a warning but not an error?
I didn't really see this error on the migration test I did. I will repeat it now, just in case.
The only error I could see was:
(1396/1527) Installing: uyuni-build-keys-2021.06-3.2.uyuni1.noarch [done]
Additional rpm output:
importing Uyuni build key to rpm keyring...
gpg: public key of ultimately trusted key 8EFD162952047CD0 not found
importing the key from the file /usr/lib/uyuni/uyuni-build-keys.gpg returned an error. This should not happen. It may not be possible to properly verify the authenticity of rpm packages from SUSE sources. The keyring containing the SUSE rpm package signing key can be found in the root directory of the first CD (DVD) of your SUSE product.
warning: %post(uyuni-build-keys-2021.06-3.2.uyuni1.noarch) scriptlet failed,
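The warning concerns the rpm keyring import done by the package's %post scriptlet. Two checks that may help narrow it down on the server (a sketch; the key id is taken from the warning above, and the commands only make sense on the affected system):

```shell
# List the public keys rpm currently trusts and look for the Uyuni build key
rpm -q gpg-pubkey --qf '%{NAME}-%{VERSION}-%{RELEASE}\t%{SUMMARY}\n'

# Inspect the keyring file the scriptlet was importing from
gpg --no-default-keyring --keyring /usr/lib/uyuni/uyuni-build-keys.gpg --list-keys
```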
-----Original Message----- From: Julio Gonzalez <jgonzalez@suse.com> Sent: 25 June 2021 09:36 To: uyuni-users@opensuse.org; users@lists.uyuni-project.org Cc: Simon Avery <Simon.Avery@atass-sports.co.uk> Subject: [EXTERNAL EMAIL] Re: Unable to restart Uyuni following Uyuni upgrade 2021-06 On viernes, 25 de junio de 2021 10:30:06 (CEST) Simon
Avery wrote:
On retrying with the Major upgrade path, things were much more positive (and simpler) and the update completed.
One warning about the master GPG key, but otherwise things look good - and I can see that modules.yaml is now populating in the repo.
What do you mean? What master GPG key :-?
Hopefully that will fix my Centos/Rocky 8.4 issues.
Thanks
-----Original Message----- From: Simon Avery Sent: 24 June 2021 14:26 To: 'Julio Gonzalez' <jgonzalez@suse.com>; uyuni-users@opensuse.org; users@lists.uyuni-project.org Cc: Joseph Cayouette <JCayouette@suse.com> Subject: RE: Unable to restart Uyuni following Uyuni upgrade 2021-06
Thanks, Julio.
It looks like my mistake was following the minor-upgrade steps, then going off piste with manual steps.
I've now rolled back to the starting place, and will have another stab at it tomorrow using the link you provided.
S
-----Original Message----- From: Julio Gonzalez <jgonzalez@suse.com> Sent: 24 June 2021 13:31 To: uyuni-users@opensuse.org; users@lists.uyuni-project.org Cc: Simon Avery <Simon.Avery@atass-sports.co.uk>; Joseph Cayouette <JCayouette@suse.com> Subject: [EXTERNAL EMAIL] Re: Unable to restart Uyuni following Uyuni upgrade 2021-06 On jueves, 24 de junio de 2021
14:20:41 (CEST) Simon Avery wrote:
Hi Folk,
Thanks for the new release - but I'm having significant issues (around four hours so far) in updating this and currently have a broken system.
Please advise!
'spacewalk-server status' reports all is fine; however, the web UI times out (at other stages, it was returning a 404 for all URLs).
Steps I have done: * Changed repos and upgraded base OS to Leap 15.3
* Upgraded database as per
https://www.uyuni-project.org/uyuni-docs/en/uyuni/upgrade/db-migration-13.html
* Changed the Uyuni repo. * Normal Uyuni upgrade.
Wrong steps.
https://www.uyuni-project.org/doc/2021.06/release-notes-uyuni-server.html
Upgrade notes

WARNING: Check the "Update from previous versions of Uyuni Server" section below for details, as this release updates the base OS from openSUSE Leap 15.2 to openSUSE Leap 15.3, and there are special steps required. You need at least Uyuni 2020.07 already installed to perform the upgrade.
Then if you go to that section...
Update from previous versions of Uyuni Server

WARNING: Make sure you check the documentation this time. Because of the change from openSUSE Leap 15.2 to openSUSE Leap 15.3, some special steps are required!

WARNING: This applies not only when updating from 2021.05, but also when updating from any version after 2020.07 (included). Updating from 2020.06 and older is not supported anymore.
See the "Upgrade Guide" for detailed instructions on how to upgrade. You will need to follow the "Upgrade the Server" > "Server - Major Upgrade" section.
All connected clients will continue to run and are manageable unchanged
And the doc: https://www.uyuni-project.org/uyuni-docs/en/uyuni/upgrade/server-major-upgrade-uyuni.html
*Maybe* you can fix the issues by doing the procedure again, EXCEPT the call to /usr/lib/susemanager/bin/pg-migrate-12-to-13.sh (step 3) as you already did that.
That should work, but if you have a backup, it's better if you restore it and start the upgrade again.
TBH, I wonder if we should not just remove the "Upgrade the Database" section and integrate it with the "Server - Major Upgrade" section.
Joseph any opinion?
All package issues seemingly resolved, apart from this:
Problem: the to be installed patch:SUSE-2020-3767-1.noarch conflicts with 'apache-commons-el < 1.0-3.3.1' provided by the installed apache-commons-el-1.0-bp153.2.24.noarch
Solution 1: Following actions will be done:
  deinstallation of apache-commons-el-1.0-bp153.2.24.noarch
  deinstallation of spacewalk-java-4.2.23-1.7.uyuni1.noarch
  deinstallation of spacewalk-common-4.2.3-1.6.uyuni1.noarch
  deinstallation of spacewalk-postgresql-4.2.3-1.6.uyuni1.noarch
  deinstallation of patterns-uyuni_server-2021.06-2.3.uyuni1.x86_64
  deinstallation of susemanager-4.2.19-1.2.uyuni1.x86_64
  deinstallation of supportutils-plugin-susemanager-4.2.2-2.4.uyuni1.noarch
  deinstallation of uyuni-cluster-provider-caasp-4.2.3-1.4.uyuni1.noarch
Solution 2: do not install patch:SUSE-2020-3767-1.noarch
(Did try #1, then reinstalling uyuni-server patterns, but it just reappears. ISTR there was a breaking issue with apache-commons-el in a previous update)
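If Solution 2 is the right call, it has to be re-chosen on every zypper run unless the patch is locked; a package lock makes the choice persistent. A sketch (double-check the patch name against your own solver output before locking, and only on the affected server):

```shell
# Persistently skip the conflicting patch instead of answering the solver
# prompt every time
zypper addlock --type patch SUSE-2020-3767-1
zypper locks                                   # verify the lock is listed
# Later, once a fixed patch is published:
# zypper removelock --type patch SUSE-2020-3767-1
```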
The main issue appears to be related to this, in catalina's logs: (The dir is present and has an index.jsp file and dirs.)
24-Jun-2021 11:24:31.328 INFO [main] org.apache.catalina.core.StandardEngine.startInternal Starting Servlet engine: [Apache Tomcat/9.0.36]
24-Jun-2021 11:24:31.357 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory [/srv/tomcat/webapps/rhn]
24-Jun-2021 11:24:33.950 SEVERE [main] org.apache.catalina.startup.HostConfig.deployDirectory Error deploying web application directory [/srv/tomcat/webapps/rhn]
java.lang.IllegalStateException: Error starting child
    at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:720)
    at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:690)
    at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:705)
    at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1132)
    at org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1865)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at org.apache.tomcat.util.threads.InlineExecutorService.execute(InlineExecutorService.java:75)
    at java.base/java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:118)
    at org.apache.catalina.startup.HostConfig.deployDirectories(HostConfig.java:1044)
    at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:429)
    at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1575)
    at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:309)
    at org.apache.catalina.util.LifecycleBase.fireLifecycleEvent(LifecycleBase.java:123)
    at org.apache.catalina.util.LifecycleBase.setStateInternal(LifecycleBase.java:423)
    at org.apache.catalina.util.LifecycleBase.setState(LifecycleBase.java:366)
    at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:936)
    at org.apache.catalina.core.StandardHost.startInternal(StandardHost.java:841)
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
    at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1384)
    at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1374)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at org.apache.tomcat.util.threads.InlineExecutorService.execute(InlineExecutorService.java:75)
    at java.base/java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:140)
    at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:909)
    at org.apache.catalina.core.StandardEngine.startInternal(StandardEngine.java:262)
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
    at org.apache.catalina.core.StandardService.startInternal(StandardService.java:421)
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
    at org.apache.catalina.core.StandardServer.startInternal(StandardServer.java:930)
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
    at org.apache.catalina.startup.Catalina.start(Catalina.java:633)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:343)
    at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:474)
Caused by: org.apache.catalina.LifecycleException: Failed to start component [StandardEngine[Catalina].StandardHost[localhost].StandardContext[/rhn]]
    at org.apache.catalina.util.LifecycleBase.handleSubClassException(LifecycleBase.java:440)
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:198)
    at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:717)
    ... 37 more
Caused by: java.lang.NullPointerException
    at org.apache.tomcat.util.scan.StandardJarScanner.process(StandardJarScanner.java:382)
    at org.apache.tomcat.util.scan.StandardJarScanner.scan(StandardJarScanner.java:195)
    at org.apache.catalina.startup.ContextConfig.processJarsForWebFragments(ContextConfig.java:1971)
    at org.apache.catalina.startup.ContextConfig.webConfig(ContextConfig.java:1129)
    at org.apache.catalina.startup.ContextConfig.configureStart(ContextConfig.java:775)
    at org.apache.catalina.startup.ContextConfig.lifecycleEvent(ContextConfig.java:301)
    at org.apache.catalina.util.LifecycleBase.fireLifecycleEvent(LifecycleBase.java:123)
    at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5044)
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
    ... 38 more
24-Jun-2021 11:24:33.958 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/srv/tomcat/webapps/rhn] has finished in [2,601] ms
24-Jun-2021 11:24:33.990 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["ajp-nio-127.0.0.1-8009"]
24-Jun-2021 11:24:34.297 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["ajp-nio-0:0:0:0:0:0:0:1-8009"]
24-Jun-2021 11:24:34.509 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-127.0.0.1-8080"]
24-Jun-2021 11:24:34.564 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in [3,354] milliseconds
Simon Avery
Linux Systems Administrator
-- Julio González Gil Release Engineer, SUSE Manager and Uyuni jgonzalez@suse.com
On viernes, 25 de junio de 2021 14:58:03 (CEST) Simon Avery wrote:
No - the error happened today during the major update.
I can't reproduce your issue. For me the package installs without that warning. Is there anything particular about that system? The complaint looks like a problem with the keystore. Can anyone reproduce that problem?
Additionally, when I try to register or re-register clients (Centos 7, for example), it fails and salt/minion logs this, which feels related;
Can I add this key manually?
I think your problem is not the GPG key, but the SSL certificate installed at the Proxy. Did you follow the instructions to deploy the proxy, or are you using your own SSL certificates?
[curl "Peer's Certificate issuer is not recognized" / GPG key import errors for res-gpg-pubkey-0182b964.key and sle12-gpg-pubkey-39db7c82.key, quoted in full earlier in the thread]
-----Original Message----- From: Julio Gonzalez <jgonzalez@suse.com> Sent: 25 June 2021 11:52 To: uyuni-users@opensuse.org; users@lists.uyuni-project.org; Simon Avery <Simon.Avery@atass-sports.co.uk> Subject: [EXTERNAL EMAIL] Re: Unable to restart Uyuni following Uyuni upgrade 2021-06 On viernes, 25 de junio de 2021 11:28:06 (CEST) Simon Avery wrote:
I rolled right back from yesterday to a 2021-05 state, so none of yesterday's attempts affected it - so please disregard any of that.
So the error happened yesterday, and not with the migration you performed today after the rollback, right?
I can confirm that I could not reproduce this problem. In my case uyuni-build-keys installs without that warning.
This error was at the end of `/usr/lib/susemanager/bin/server-migrator.sh`.
Once rebooted, I then ran `/usr/lib/susemanager/bin/pg-migrate-12-to-13.sh`, which completed normally, and Uyuni started up.
S
-----Original Message----- From: Julio Gonzalez <jgonzalez@suse.com> Sent: 25 June 2021 10:26 To: uyuni-users@opensuse.org; users@lists.uyuni-project.org; Simon Avery <Simon.Avery@atass-sports.co.uk> Subject: [EXTERNAL EMAIL] Re: Unable to restart Uyuni following Uyuni upgrade 2021-06 On viernes, 25
de junio de 2021 11:02:20 (CEST) Simon Avery wrote:
At the end of the migration script, it showed this, prompting me to review scrollback.
Migration went wrong. Please fix the issues and try again.
Hi Julio
On viernes, 25 de junio de 2021 14:58:03 (CEST) Simon Avery wrote:
No - the error happened today during the major update.
I can't reproduce your issue. For me the package installs without that warning.
On martes, 29 de junio de 2021 13:44:07 (CEST) Simon Avery wrote:
Hi Julio
On viernes, 25 de junio de 2021 14:58:03 (CEST) Simon Avery wrote:
No - the error happened today during the major update.
I can't reproduce your issue. For me the package installs without that warning.

I suspect more people would have come forward by now if it were widespread, so unless someone else steps forward, it's probably not worth spending time on it.
* Is there anything particular about that system? The complaint looks like a problem with the keystore.
Not that I know of. I built it in June last year and migrated about 200 CentOS machines over from Spacewalk, updating with each release. It's always worked perfectly with CentOS 7 clients, and partially with CentOS 8. I rely on it heavily.
There may be a problem with the keystore. I must admit this is a weak area of mine.
Since it was just a warning, I guess it should not affect functionality, but please report back if you see problems.
* I think your problem is not the GPG key, but the SSL certificate installed at the proxy the Proxy. Did you follow the instructions to deploy the proxy, or are you using your own SSL certificates?
There's no proxy involved. Standalone single Uyuni instance. I used the self-signed certs at the time so there's no third party.
True, somehow I assumed "ata-oxy-uyuni01.atass.com" was a proxy.
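With self-signed certificates, the first thing worth confirming is that the CA the clients trust is the same one that signed the server certificate. A self-signed CA has identical subject and issuer, which is easy to check with openssl. This is an illustrative sketch against a throwaway certificate (all names are made up for the demo); on the real systems you would point the x509 commands at the deployed CA file instead:

```shell
# A certificate is self-signed when its subject equals its issuer.
# Generate a throwaway CA cert, then compare the two fields.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=demo-uyuni-ca" \
    -keyout "$tmp/ca.key" -out "$tmp/ca.crt" 2>/dev/null
subject=$(openssl x509 -noout -subject -in "$tmp/ca.crt")
issuer=$(openssl x509 -noout -issuer -in "$tmp/ca.crt")
if [ "${subject#subject=}" = "${issuer#issuer=}" ]; then
    echo "self-signed"   # clients must carry this exact CA in their bundle
fi
```

If subject and issuer differ on the real CA file, the chain was signed by something else, and the client bundle would need that signer instead.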
I did try today to regenerate this cert, using https://www.uyuni-project.org/uyuni-docs/en/uyuni/administration/ssl-certs-> selfsigned.html - but ran into problems as the second process prompted for a password to the RHN-* key and failed without one, which confused me as I don't recall needing that before and I have no record of what that password might be. Unfortunately I won't have time to dig into that further today, but I'm going to need to come back to it.
I am 99.99% sure it's a password you specified when you created the Uyuni Server. If you remember, part of the installation procedure is the YaST setup (right after the packages are installed). The setup asked questions about data for the certificates and the database, including a couple of passwords. If you don't remember what the password is, check /root/setup_env.sh on the server. If you didn't remove the file, it will contain the password.

-- Julio González Gil Release Engineer, SUSE Manager and Uyuni jgonzalez@suse.com
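The tip above can be scripted, since the setup answers file is plain shell that can be sourced. The variable name CERT_PASS is an assumption here (verify it against your own /root/setup_env.sh); the sketch runs against a mock file so it is demonstrable anywhere:

```shell
# Sketch: read the certificate password back out of the setup answers file.
# CERT_PASS is an assumed variable name -- check your real /root/setup_env.sh.
# A mock file stands in for the real one in this demo.
cat > /tmp/setup_env.sh <<'EOF'
MANAGER_USER='spacewalk'
CERT_PASS='example-password'
EOF
. /tmp/setup_env.sh
echo "certificate password: $CERT_PASS"
```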
participants (2): Julio Gonzalez, Simon Avery