[uyuni-users] Re-registration of system with the same minion-id results in unmanageable system
Heya,

running Uyuni 2020.04 we encounter the following issue: if a user 'recycles' a system (through reprovisioning), the following happens:

1) We do not explicitly remove the current system profile in Uyuni, nor do we remove the minion from Salt.
2) The system is reprovisioned through our internal deployment system.
3) After successful provisioning, we bootstrap the host with the same minion-id.

The result: the already existing system profile is not updated. Crucial information (like the dbus/systemd machine-id) is not refreshed in the profile, leading to 'interesting' errors like these:

```
local:
    Data failed to compile:
----------
    Specified SLS packages.packages_b60982e0e0334b5baffb24ca15e2b98a in saltenv base is not available on the salt master or through a configured fileserver
----------
    Specified SLS custom.custom_b60982e0e0334b5baffb24ca15e2b98a in saltenv base is not available on the salt master or through a configured fileserver
```

This happens because Uyuni is still using the machine-id of the previously deployed system when generating the Salt states. I raised the Taskomatic log level to debug in the hope of seeing 'something' [TM], but grepping those logs for the machine-ids/hostnames didn't help me either. It seems as if Uyuni simply skips the bootstrap/registration procedure if the minion-id already exists, and there does not seem to be a way to 'force' a re-registration.

So, my question to the Uyuni devs: is the only way to work around this to delete the system profile completely, or is there a magic flag/configuration setting hidden somewhere that I could specify?

Regards,
Mattias
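For illustration, a minimal sketch of how a deployment pipeline could detect this situation before re-bootstrapping, assuming the Uyuni XML-RPC endpoint at `/rpc/api` and the long-standing Spacewalk-inherited `system.getId` call; the server URL, credentials and hostname are placeholders:

```python
#!/usr/bin/env python3
# Sketch: ask Uyuni whether a system profile with the recycled host's name
# already exists before re-bootstrapping. URL, credentials and hostname are
# placeholders; adjust SSL handling to your environment.
from xmlrpc.client import ServerProxy

client = ServerProxy("https://uyuni.example.com/rpc/api")
session = client.auth.login("admin", "secret")
try:
    existing = client.system.getId(session, "recycled-host.example.com")
    for profile in existing:
        # Each entry carries the profile id and its last check-in time.
        print("possibly stale profile: id=%s last_checkin=%s"
              % (profile["id"], profile["last_checkin"]))
finally:
    client.auth.logout(session)
```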
Hi Mattias,

You can achieve what you want using reactivation keys: create a reactivation key and then add it as the `management_key` grain in the minion configuration:

```
grains:
  susemanager:
    - management_key: "1-myreactivation-key"
```

Now about the error: yes, Uyuni/SUSE Manager uses the machine-id, in combination with the minion id, to identify a minion. In your case it sees that a minion with this id already exists with a different machine_id, so it simply logs an error that the minion already exists and does nothing. You should be able to find the exact error message in the rhn_web_ui.log file. Later, when a Salt state file uses something like `custom.custom_{{ grains['machine_id'] }}`, it picks up the new machine_id grain, but Uyuni has no record of the new machine_id and all the generated files still point to the old machine_id, hence the error.

Regards,
Abid
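As a minimal sketch of that approach, the deployment system could fetch a reactivation key for the existing profile over the XML-RPC API (`system.obtainReactivationKey`) and then write it into the minion's grains as shown above; the server URL and credentials are placeholders:

```python
#!/usr/bin/env python3
# Sketch: obtain a reactivation key for an existing profile so it can be
# written into the minion's grains (susemanager:management_key) before the
# recycled host is bootstrapped again. URL/credentials/hostname are placeholders.
from xmlrpc.client import ServerProxy

client = ServerProxy("https://uyuni.example.com/rpc/api")
session = client.auth.login("admin", "secret")
try:
    systems = client.system.getId(session, "recycled-host.example.com")
    if systems:
        key = client.system.obtainReactivationKey(session, systems[0]["id"])
        print(key)  # e.g. "1-..." -- this value goes into the management_key grain
finally:
    client.auth.logout(session)
```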
On 7/6/20 6:08 PM, Mattias Giese wrote:
> Heya,
>
> running Uyuni 2020.04 we encounter the following issue: if a user 'recycles' a system (through reprovisioning), the following happens:
>
> 1) We do not explicitly remove the current system profile in Uyuni, nor do we remove the minion from Salt.
> 2) The system is reprovisioned through our internal deployment system.
> 3) After successful provisioning, we bootstrap the host with the same minion-id.
>
> The result: the already existing system profile is not updated. Crucial information (like the dbus/systemd machine-id) is not refreshed in the profile, leading to 'interesting' errors like these:
>
> ```
> local:
>     Data failed to compile:
> ----------
>     Specified SLS packages.packages_b60982e0e0334b5baffb24ca15e2b98a in saltenv base is not available on the salt master or through a configured fileserver
> ----------
>     Specified SLS custom.custom_b60982e0e0334b5baffb24ca15e2b98a in saltenv base is not available on the salt master or through a configured fileserver
> ```
>
> This happens because Uyuni is still using the machine-id of the previously deployed system when generating the Salt states. I raised the Taskomatic log level to debug in the hope of seeing 'something' [TM], but grepping those logs for the machine-ids/hostnames didn't help me either. It seems as if Uyuni simply skips the bootstrap/registration procedure if the minion-id already exists, and there does not seem to be a way to 'force' a re-registration.
>
> So, my question to the Uyuni devs: is the only way to work around this to delete the system profile completely, or is there a magic flag/configuration setting hidden somewhere that I could specify?
>
> Regards,
> Mattias
--
Abid Mehmood
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nuremberg, Germany
(HRB 36809, AG Nürnberg) Managing Director: Felix Imendörffer
Heya,

On 06/07/20 19:40:16, Abid Mehmood wrote:
> Hi Mattias,
>
> You can achieve what you want using reactivation keys: create a reactivation key and then add it as the `management_key` grain in the minion configuration:
>
> ```
> grains:
>   susemanager:
>     - management_key: "1-myreactivation-key"
> ```
>
> Now about the error: yes, Uyuni/SUSE Manager uses the machine-id, in combination with the minion id, to identify a minion. In your case it sees that a minion with this id already exists with a different machine_id, so it simply logs an error that the minion already exists and does nothing. You should be able to find the exact error message in the rhn_web_ui.log file. Later, when a Salt state file uses something like `custom.custom_{{ grains['machine_id'] }}`, it picks up the new machine_id grain, but Uyuni has no record of the new machine_id and all the generated files still point to the old machine_id, hence the error.
Yup, that part I had already figured out. So as far as I can see, there is no way to simply force the deletion of the old system profile (I am not interested in associating the old system profile in Uyuni with the new system). In our case I will force the deletion within our internal deployment process, which interacts with Uyuni.

Thanks and regards,
Mattias
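One way such a deployment-side workaround could look, as a minimal sketch using the Uyuni XML-RPC API (`system.getId` plus `system.deleteSystems`) to drop the stale profile before the host is re-bootstrapped; the server URL, credentials and hostname are placeholders:

```python
#!/usr/bin/env python3
# Sketch: let the deployment pipeline delete the stale Uyuni profile before
# re-bootstrapping the recycled host, so registration creates a fresh profile
# with the new machine-id. URL/credentials/hostname are placeholders.
from xmlrpc.client import ServerProxy

client = ServerProxy("https://uyuni.example.com/rpc/api")
session = client.auth.login("admin", "secret")
try:
    stale = client.system.getId(session, "recycled-host.example.com")
    if stale:
        client.system.deleteSystems(session, [s["id"] for s in stale])
finally:
    client.auth.logout(session)
```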