http://bugzilla.opensuse.org/show_bug.cgi?id=1134332
Bug ID: 1134332
Summary: Add kubic_worker and kubic_admin roles
Classification: openSUSE
Product: openSUSE Tumbleweed
Version: Current
Hardware: Other
OS: Other
Status: NEW
Severity: Normal
Priority: P5 - None
Component: Kubic
Assignee: kubic-bugs(a)opensuse.org
Reporter: rbrown(a)suse.com
QA Contact: qa-bugs(a)suse.de
Found By: ---
Blocker: ---
New roles, new fun, for kubicd and kubicctl
--
http://bugzilla.suse.com/show_bug.cgi?id=1131330
http://bugzilla.suse.com/show_bug.cgi?id=1131330#c4
Michael Matz <matz(a)suse.com> changed:
What |Removed |Added
----------------------------------------------------------------------------
CC| |matz(a)suse.com
Flags| |needinfo?(asbeer(a)gmail.com)
--- Comment #4 from Michael Matz <matz(a)suse.com> ---
Do any of you by any chance have CPUs capable of hardware lock elision? I can't
reproduce the problem with any glibc I find (e.g.
glibc-2.26-lp150.10.13.x86_64 (Leap 15)
glibc-2.26-lp150.11.17.1.x86_64 (Leap 15)
glibc-2.26-18.3.x86_64 (SLE 15 SP1)) on any machine I can find. But those
machines all have either AMD CPUs or Intel CPUs without HLE.
The code path using HLE for the low-level locks in glibc behaves differently
from the traditional code path, including different behaviour in error cases.
So it might be that, but can you please confirm the hardware details? lscpu
output from one machine where you can reproduce it would be good.
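(For a quick check, assuming an x86 machine: HLE is advertised as the 'hle' flag
in /proc/cpuinfo, so something like
grep -wo hle /proc/cpuinfo | sort -u
printing 'hle' would confirm the CPU supports it; the full lscpu output is still
preferred.)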
--
http://bugzilla.suse.com/show_bug.cgi?id=1136641
Arvin Schnell <aschnell(a)suse.com> changed:
What |Removed |Added
----------------------------------------------------------------------------
Assignee|bnc-team-screening(a)forge.provo.novell.com|ghe(a)suse.com
--
http://bugzilla.opensuse.org/show_bug.cgi?id=1137056
Bug ID: 1137056
Summary: Automatically add a custom generated key to initrd
when installing to LVM/LUKS
Classification: openSUSE
Product: openSUSE Distribution
Version: Leap 15.1
Hardware: x86-64
OS: Other
Status: NEW
Severity: Enhancement
Priority: P5 - None
Component: Installation
Assignee: yast2-maintainers(a)suse.de
Reporter: alexander.shchadilov(a)gmail.com
QA Contact: jsrain(a)suse.com
Found By: ---
Blocker: ---
If an encrypted system partition is configured during installation, openSUSE
puts /boot inside of it. While this scheme has certain security advantages, it
also brings the inconvenience of entering the LUKS password twice. This can be
avoided by adding a custom key file that the system uses to unlock the
encrypted partitions; GRUB then becomes the only software that asks for the
password.
openSUSE wiki:
https://en.opensuse.org/SDB:Encrypted_root_file_system
http://web.archive.org/web/20190601195245/https://en.opensuse.org/SDB:Encry…
Arch wiki:
https://wiki.archlinux.org/index.php/Dm-crypt/Encrypting_an_entire_system#E…
http://web.archive.org/web/20190522050457/https://wiki.archlinux.org/index.…
So this is a feature request for an automated version of that procedure during
OS installation. There are no security drawbacks AFAIK.
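For reference, the manual procedure described in the wikis above looks roughly
like this (a sketch only; /dev/sdaX and the key file path are placeholders, not
taken from a real setup):
# generate a random key file and restrict access to it
dd if=/dev/urandom of=/root/crypto_keyfile.bin bs=512 count=8
chmod 600 /root/crypto_keyfile.bin
# enrol it as an additional LUKS key (asks for the existing passphrase)
cryptsetup luksAddKey /dev/sdaX /root/crypto_keyfile.bin
# reference the key file in /etc/crypttab, e.g.:
#   cr_root  /dev/sdaX  /root/crypto_keyfile.bin  luks
# and have dracut include it in the initrd
echo 'install_items+=" /root/crypto_keyfile.bin "' > /etc/dracut.conf.d/99-luks-key.conf
dracut -f
The request is for the installer to do the equivalent of these steps
automatically.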
--
http://bugzilla.suse.com/show_bug.cgi?id=1136641
Arvin Schnell <aschnell(a)suse.com> changed:
What |Removed |Added
----------------------------------------------------------------------------
CC| |jsrain(a)suse.com
--
http://bugzilla.opensuse.org/show_bug.cgi?id=1130702
Bug ID: 1130702
Summary: Lockup during "zypper dup"
Classification: openSUSE
Product: openSUSE Distribution
Version: Leap 15.1
Hardware: x86-64
OS: SUSE Other
Status: NEW
Severity: Normal
Priority: P5 - None
Component: Other
Assignee: bnc-team-screening(a)forge.provo.novell.com
Reporter: nwr10cst-oslnx(a)yahoo.com
QA Contact: qa-bugs(a)suse.de
Found By: ---
Blocker: ---
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101
Firefox/60.0
Build Identifier:
I was updating to the latest build (Build438.2). After downloading the
packages, the installation began. And it got to the point where I was seeing:
( 2/105) Installing: btrfsmaintenance-0.4.2-lp151.1.1.noarch
..<48%>========[-]
It never budged beyond that 48%. I waited 10 minutes, with nothing happening.
There was almost no cpu usage. Looking from another xterm, it appeared that
the hangup was in:
/usr/bin/systemctl try-restart btrfsmaintenance-refresh.service
btrfsmaintenance-refresh.path btrfs-balance.service btrfs-balance.timer
I terminated the "zypper" run with CTRL-C (twice). Restarting gave me an error
about releasing a lock. So I rebooted (hoping to clear locks), and reran the
"zypper" command. This time it completed successfully. But "btrfsmaintenance"
was not listed as an installed package. For safety, I did a force reinstall of
that package (using Yast, so it does not show up in my screenlog).
As far as I can tell, all is now fine. But this should not have happened. I'm
not sure where the blame lies.
I will attach a couple of files with information.
Reproducible: Didn't try
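(Side note on the lock error: the libzypp lock is a PID file, typically
/run/zypp.pid, so instead of rebooting one can check whether the recorded
process is still alive, e.g.
cat /run/zypp.pid
ps -p $(cat /run/zypp.pid)
and only remove the file if that process is gone. The lock location is an
assumption about the usual setup, not something I verified during this run.)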
--
http://bugzilla.suse.com/show_bug.cgi?id=1136641
http://bugzilla.suse.com/show_bug.cgi?id=1136641#c3
Arvin Schnell <aschnell(a)suse.com> changed:
What |Removed |Added
----------------------------------------------------------------------------
Priority|P5 - None |P2 - High
Component|Installation |Basesystem
Assignee|yast-internal(a)suse.de |bnc-team-screening(a)forge.provo.novell.com
QA Contact|jsrain(a)suse.com |qa-bugs(a)suse.de
--- Comment #3 from Arvin Schnell <aschnell(a)suse.com> ---
Thanks for the report including logs.
I am able to reproduce the problem with Leap 15.1. With Leap 15.0
the problem does not appear. One problem is that the output of pvs
has changed. In Leap 15.0 pvs only reports the physical volumes in
the RAID. In Leap 15.1 it also reports the physical volumes on the
partitions used for the RAID. For this, YaST could simply be updated.
But when activating LVM in the installed system I get errors from
'vgchange -a y' that some physical volumes have duplicates and thus
activation fails (on a named RAID; on an unnamed RAID it works):
# vgchange -ay
WARNING: found device with duplicate /dev/sdc2
WARNING: found device with duplicate /dev/md127
WARNING: Disabling lvmetad cache which does not support duplicate PVs.
WARNING: Scan found duplicate PVs.
WARNING: Not using lvmetad because cache update failed.
WARNING: Not using device /dev/sdc2 for PV
ECQlxl-NZZr-fLCA-Bc2P-L1yQ-szWg-cXKEQC.
WARNING: Not using device /dev/md127 for PV
ECQlxl-NZZr-fLCA-Bc2P-L1yQ-szWg-cXKEQC.
WARNING: PV ECQlxl-NZZr-fLCA-Bc2P-L1yQ-szWg-cXKEQC prefers device /dev/sdb2
because of previous preference.
WARNING: PV ECQlxl-NZZr-fLCA-Bc2P-L1yQ-szWg-cXKEQC prefers device /dev/sdb2
because of previous preference.
Cannot activate LVs in VG vg-b while PVs appear on duplicate devices.
0 logical volume(s) in volume group "vg-b" now active
1 logical volume(s) in volume group "vg-a" now active
This looks like a problem of the openSUSE base system.
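As a possible interim workaround (a sketch only, using the device names from
the log above; the real fix belongs in the base system), the RAID member
partitions can be hidden from LVM with a global_filter in /etc/lvm/lvm.conf so
that only the MD device is scanned:
# devices { } section of /etc/lvm/lvm.conf: reject the RAID members, accept the rest
global_filter = [ "r|^/dev/sdb2$|", "r|^/dev/sdc2$|", "a|.*|" ]
After that, 'vgchange -ay' should no longer see duplicate PVs for vg-b.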
--