Re: [opensuse] Root and boot partitions on a sw raid volume (opensuse 12.2 - 12.3 rc1 64bit)
The tests I'm doing tell me that there are severe problems using the YaST installation. Should I consider this a bug, or am I simply trying to do something that is deliberately unsupported?
I would open a bug report.
I'm going to wait for the 12.3 final release and, if the behavior is the same, I'll open a bug report. I have already tried 12.3 RC2 with no luck. In the meantime, if any of you have encountered a similar issue or have a similar partition layout, please share your experiences; it would be very useful. What sounds very strange to me is that nobody has reported this issue before. I think every system with a software RAID configuration should also have the boot partition on top of a RAID1 volume. I don't think I have an uncommon partition layout...
jjletho67-esus@yahoo.it wrote:
The tests I'm doing tell me that there are severe problems using the YaST installation. Should I consider this a bug, or am I simply trying to do something that is deliberately unsupported?
I would open a bug report.
I'm going to wait for the 12.3 final release and, if the behavior is the same, I'll open a bug report. I have already tried 12.3 RC2 with no luck.
I wouldn't expect any such major changes. Maybe you could try 12.2 and see how that goes.
In the meantime, if any of you have encountered a similar issue or have a similar partition layout, please share your experiences; it would be very useful. What sounds very strange to me is that nobody has reported this issue before. I think every system with a software RAID configuration should also have the boot partition on top of a RAID1 volume. I don't think I have an uncommon partition layout...
I agree.
-- Per Jessen, Zürich (1.6°C) http://www.dns24.ch/ - free DNS hosting, made in Switzerland.
Per Jessen wrote:
jjletho67-esus@yahoo.it wrote:
I think every system with a software RAID configuration should also have the boot partition on top of a RAID1 volume. I don't think I have an uncommon partition layout...
I agree.
Why so? I use RAID to protect my data. I use a separate backup image to protect my system. I confess I'm also influenced by the apparent difficulties in setting up and maintaining bootable RAIDs. I guess it might depend on your goals when using RAID.
Dave Howorth wrote:
Per Jessen wrote:
jjletho67-esus@yahoo.it wrote:
I think every system with a software RAID configuration should also have the boot partition on top of a RAID1 volume. I don't think I have an uncommon partition layout...
I agree.
Why so?
Due to hard disk prices, I think RAID configurations are quite common.
I use RAID to protect my data. I use a separate backup image to protect my system. I confess I'm also influenced by the apparent difficulties in setting up and maintaining bootable RAIDs.
I guess it might depend on your goals when using RAID.
I only have one - availability/up-time.
-- Per Jessen, Zürich (1.7°C) http://www.dns24.ch/ - free DNS hosting, made in Switzerland.
I think every system with a software RAID configuration should also have the boot partition on top of a RAID1 volume. I don't think I have an uncommon partition layout...
I agree.
Why so?
Due to hard disk prices, I think RAID configurations are quite common.
I agree. Nowadays, with low-cost SATA disks and software RAID, a simple workstation can easily have a reasonably safe RAID configuration.
I use RAID to protect my data. I use a separate backup image to protect my system. I confess I'm also influenced by the apparent difficulties in setting up and maintaining bootable RAIDs.
I guess it might depend on your goals when using RAID.
I only have one - availability/up-time.
Backup is for recovery from file system corruption or accidental deletion of files. RAID protects against hardware failure and gives you a running system immediately after a disk failure... if you can properly boot! Nowadays I don't think setting up and maintaining a bootable RAID is so difficult. Most of the major distros allow you to set up RAID during installation without any special tricks (I've personally been using it on Red Hat-based distros since 2008), which is why I'm so surprised that openSUSE apparently does not support it.
jjletho67-esus@yahoo.it wrote:
Nowadays I don't think setting up and maintaining a bootable RAID is so difficult. Most of the major distros allow you to set up RAID during installation without any special tricks (I've personally been using it on Red Hat-based distros since 2008), which is why I'm so surprised that openSUSE apparently does not support it.
I'm certain we support it (the yast partitioner has all the necessary functionality), but there is clearly some kind of problem.
-- Per Jessen, Zürich (1.6°C) http://www.dns24.ch/ - free DNS hosting, made in Switzerland.
Dave Howorth wrote:
jjletho67-esus@yahoo.it wrote:
I think every system with a software RAID configuration should also have the boot partition on top of a RAID1 volume. I don't think I have an uncommon partition layout...
Per Jessen wrote:
I agree.
Why so? I use RAID to protect my data. I use a separate backup image to protect my system. I confess I'm also influenced by the apparent difficulties in setting up and maintaining bootable RAIDs.
I guess it might depend on your goals when using RAID.
IIRC, /boot can be installed on RAID 1.
I'm going to wait for the 12.3 final release and, if the behavior is the same, I'll open a bug report. I have already tried 12.3 RC2 with no luck.
I wouldn't expect any such major changes. Maybe you could try 12.2 and see how that goes.
I've already tried :-) 12.2, 12.3 RC1, and 12.3 RC2. All these versions exhibit very similar behavior.
jjletho67-esus@yahoo.it wrote:
I'm going to wait for the 12.3 final release and, if the behavior is the same, I'll open a bug report. I have already tried 12.3 RC2 with no luck.
I wouldn't expect any such major changes. Maybe you could try 12.2 and see how that goes.
I've already tried :-)
12.2, 12.3 RC1, 12.3 RC2
All these versions exhibit very similar behavior.
I guess you have already googled "opensuse grub2 raid"? Personally, I would suspect something in the grub config.
-- Per Jessen, Zürich (1.7°C) http://www.dns24.ch/ - free DNS hosting, made in Switzerland.
In my opinion, there are two different problems:
I'm going to wait for the 12.3 final release and, if the behavior is the same, I'll open a bug report. I have already tried 12.3 RC2 with no luck.
I wouldn't expect any such major changes. Maybe you could try 12.2 and see how that goes.
I've already tried :-)
12.2, 12.3 RC1, 12.3 RC2
All these versions exhibit very similar behavior.
I guess you have already googled "opensuse grub2 raid"? Personally, I would suspect something in the grub config.
I have just tested the openSUSE 12.3 final release and it exhibits the same behavior. I've done several tests and analyses, and I believe there are two distinct but interconnected problems, which I'll summarize here:

- openSUSE 12.3 final
- Installation through YaST
- Partition layout (implemented through YaST during the installation):

/boot -> /dev/md0 (raid1) -> sda1, sdb1
swap -> /dev/md1 (raid1) -> sda2, sdb2
/ -> logvol1 -> volgroup1 -> /dev/md2 -> sda3, sdb3
/home -> logvol2 -> volgroup1 -> /dev/md2 -> sda3, sdb3
/tmp -> logvol3 -> volgroup1 -> /dev/md2 -> sda3, sdb3
/var -> logvol4 -> volgroup1 -> /dev/md2 -> sda3, sdb3
sda4, sdb4 unformatted

Bootloader: GRUB2 -> MBR

This way I obtain an unbootable system with at least two major problems:

1) GRUB2 is installed only on the first disk.
2) /dev/md0 (which is mounted as /boot) is not correctly initialized and, at first reboot, only /dev/sdb1 is an active part of the RAID volume.

The system is not bootable because GRUB expects to find its files and the kernel on /dev/sda1, which is empty. I recovered from this situation by booting from the recovery CD and issuing a command to hot-add /dev/sda1 to /dev/md0. I waited until the synchronization was completed, and then the system was able to boot. Note that /dev/sda1 was already initialized as part of /dev/md0 but was not synced by the YaST installer.

If /dev/sda fails, the system cannot boot from /dev/sdb. This is due to GRUB not being installed in the MBR of /dev/sdb. I issued a grub2-install /dev/sdb without touching the GRUB config, and the system was able to boot even with the first disk pulled out.

I'm going to open a bug for this. Of course, I'm not sure the workarounds I described are reliable enough. For example, what happens after a kernel update? (What will happen after perl-bootloader is called?) I find the configuration scripts of GRUB2 really cryptic; I honestly can't say whether they are correctly configured!

Any comments or similar experiences are welcome.
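[Editor's note: for reference, a minimal sketch of the two workarounds described above, assuming the layout from this report (/boot = /dev/md0 on sda1/sdb1). These are standard mdadm/GRUB2 invocations, not the literal lines from the original session:

********************
linux:~ # mdadm /dev/md0 --add /dev/sda1    # hot-add the missing member to the /boot array
linux:~ # cat /proc/mdstat                  # repeat until the resync of md0 has finished
linux:~ # grub2-install /dev/sdb            # put GRUB2 into the MBR of the second disk as well
********************]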
jjletho wrote:
I have just tested the openSUSE 12.3 final release and it exhibits the same behavior.
I've done several tests and analyses, and I believe there are two distinct but interconnected problems, which I'll summarize here:
- openSUSE 12.3 final
- Installation through YaST
- Partition layout (implemented through YaST during the installation):

/boot -> /dev/md0 (raid1) -> sda1, sdb1
swap -> /dev/md1 (raid1) -> sda2, sdb2
/ -> logvol1 -> volgroup1 -> /dev/md2 -> sda3, sdb3
/home -> logvol2 -> volgroup1 -> /dev/md2 -> sda3, sdb3
/tmp -> logvol3 -> volgroup1 -> /dev/md2 -> sda3, sdb3
/var -> logvol4 -> volgroup1 -> /dev/md2 -> sda3, sdb3
sda4, sdb4 unformatted

Bootloader: GRUB2 -> MBR
This way I obtain an unbootable system with at least two major problems:
1) GRUB2 is installed only on the first disk.
That must be a bug, please report it.
2) /dev/md0 (which is mounted as /boot) is not correctly initialized and, at first reboot, only /dev/sdb1 is an active part of the RAID volume.
sda1 is not even part of the config? Your array is running degraded? I'm asking because it takes very specific commands to set up a RAID1 with only _one_ drive.
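[Editor's note: for illustration, a degraded single-member RAID1 normally has to be requested explicitly with the 'missing' keyword. A hedged example with hypothetical device names; this is not what the installer ran:

********************
linux:~ # mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
********************

The 'missing' keyword tells mdadm to create the mirror with one slot deliberately left empty.]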
The system is not bootable because GRUB expects to find its files and the kernel on /dev/sda1, which is empty.
Right.
I recovered from this situation by booting from the recovery CD and issuing a command to hot-add /dev/sda1 to /dev/md0.
Sounds good.
I waited until the synchronization was completed, and then the system was able to boot.
Note that /dev/sda1 was already initialized as part of /dev/md0 but was not synced by the YaST installer.
YaST doesn't do any syncing; that's handled by the MD software. What was the status of sda1 before you hot-added it? Did you have to remove it first?
If /dev/sda fails, the system cannot boot from /dev/sdb. This is due to GRUB not being installed in the MBR of /dev/sdb.
Yes.
I issued a grub2-install /dev/sdb without touching the GRUB config, and the system was able to boot even with the first disk pulled out.
I'm going to open a bug for this. Of course, I'm not sure the workarounds I described are reliable enough.
Yes please, open a bug report. Your workarounds look good to me, but they're less important. Judging by the lack of response to your postings here, very few people are installing onto RAID1.
-- Per Jessen, Zürich (4.4°C) http://www.dns24.ch/ - free DNS hosting, made in Switzerland.
I'm running oS 12.2 on a small server at home with the Adaptec HostRaid (fake RAID) adapter that's onboard the SuperMicro PDSMI+ motherboard, with two Seagate SATA drives in RAID1. Is there any way to test this without unplugging one of the drives and forcing the array into recovery mode? Chris
Christopher Myers wrote:
I'm running oS 12.2 on a small server at home with the Adaptec HostRaid (fake RAID) adapter that's onboard the SuperMicro PDSMI+ motherboard, with two Seagate SATA drives in RAID1. Is there any way to test this without unplugging one of the drives and forcing the array into recovery mode?
Chris, as you're not using software RAID, I don't quite see what you want to test?
-- Per Jessen, Zürich (3.2°C) http://www.dns24.ch/ - free DNS hosting, made in Switzerland.
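[Editor's note: for a plain mdadm-managed software RAID, as opposed to a BIOS fake RAID, a failure can be simulated without pulling a drive. A hedged sketch with example device names:

********************
linux:~ # mdadm /dev/md0 --fail /dev/sda1     # mark one member as faulty
linux:~ # cat /proc/mdstat                    # the array now shows [_U], i.e. degraded
linux:~ # mdadm /dev/md0 --remove /dev/sda1   # take the member out of the array
linux:~ # mdadm /dev/md0 --add /dev/sda1      # re-add it and let it resync
********************]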
2) /dev/md0 (which is mounted as /boot) is not correctly initialized and, at first reboot, only /dev/sdb1 is an active part of the RAID volume.
sda1 is not even part of the config? Your array is running degraded? I'm asking because it takes very specific commands to set up a RAID1 with only _one_ drive.
After the first reboot, this is the situation:

********************
linux:~ # cat /proc/mdstat
Personalities : [raid1]
md125 : active (auto-read-only) raid1 sdb3[1] sda3[0]
      23069568 blocks super 1.0 [2/2] [UU]
      resync=PENDING
      bitmap: 1/1 pages [4KB], 65536KB chunk

md126 : active (auto-read-only) raid1 sdb1[1]
      697280 blocks super 1.0 [2/1] [_U]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md127 : active (auto-read-only) raid1 sdb2[1] sda2[0]
      1052608 blocks super 1.0 [2/2] [UU]
      resync=PENDING
      bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>
********************

Please note that /dev/md126 is the RAID volume of interest here, the one I configured as /dev/md0, to be mounted as /boot.

********************
linux:~ # mdadm --detail /dev/md126
/dev/md126:
        Version : 1.0
  Creation Time : Fri Mar 22 10:50:00 2013
     Raid Level : raid1
     Array Size : 697280 (681.05 MiB 714.01 MB)
  Used Dev Size : 697280 (681.05 MiB 714.01 MB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Fri Mar 22 11:01:17 2013
          State : active, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : linux:0
           UUID : 70fb55c6:47dfef14:7f280172:f5642bcd
         Events : 8

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8       17        1      active sync   /dev/sdb1
********************

So the array is in degraded mode, and /dev/sda1 looks like it was removed. If I examine the two devices directly, I obtain this:

********************
linux:~ # mdadm --examine /dev/sdb1
/dev/sdb1:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x1
     Array UUID : 70fb55c6:47dfef14:7f280172:f5642bcd
           Name : linux:0
  Creation Time : Fri Mar 22 10:50:00 2013
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 1394664 (681.10 MiB 714.07 MB)
     Array Size : 697280 (681.05 MiB 714.01 MB)
  Used Dev Size : 1394560 (681.05 MiB 714.01 MB)
   Super Offset : 1394672 sectors
          State : clean
    Device UUID : c9a37b38:98bda5c8:8e38f11b:6701829e

Internal Bitmap : -8 sectors from superblock
    Update Time : Fri Mar 22 11:01:17 2013
       Checksum : 976c3f59 - correct
         Events : 8

    Device Role : Active device 1
    Array State : .A ('A' == active, '.' == missing)

linux:~ # mdadm --examine /dev/sda1
/dev/sda1:
          Magic : a92b4efc
        Version : 1.0
    Feature Map : 0x1
     Array UUID : 70fb55c6:47dfef14:7f280172:f5642bcd
           Name : linux:0
  Creation Time : Fri Mar 22 10:50:00 2013
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 1394664 (681.10 MiB 714.07 MB)
     Array Size : 697280 (681.05 MiB 714.01 MB)
  Used Dev Size : 1394560 (681.05 MiB 714.01 MB)
   Super Offset : 1394672 sectors
          State : active
    Device UUID : de82b920:cd0eb164:f7fbfd57:7c27ca4d

Internal Bitmap : -8 sectors from superblock
    Update Time : Fri Mar 22 10:50:05 2013
       Checksum : 709567b - correct
         Events : 1

    Device Role : Active device 0
    Array State : AA ('A' == active, '.' == missing)
********************

So I can conclude that the array was created with both sda1 and sdb1 but, for some unknown reason, sda1 was dropped before the first synchronization.
Yes please, open a bug report. Your workarounds look good to me, but they're less important. Judging by the lack of response to your postings here, very few people are installing onto RAID1.
I'm going to open the bug. I can't explain why so few people are using a RAID configuration. In the beginning it was honestly very complicated, but since 2007-2008 a lot of distros have started to support it directly from the installation process, allowing easier configuration.
jjletho wrote:
2) /dev/md0 (which is mounted as /boot) is not correctly initialized and, at first reboot, only /dev/sdb1 is an active part of the RAID volume.
sda1 is not even part of the config? Your array is running degraded? I'm asking because it takes very specific commands to set up a RAID1 with only _one_ drive.
After the first reboot, this is the situation:

********************
linux:~ # cat /proc/mdstat
Personalities : [raid1]
md125 : active (auto-read-only) raid1 sdb3[1] sda3[0]
      23069568 blocks super 1.0 [2/2] [UU]
      resync=PENDING
      bitmap: 1/1 pages [4KB], 65536KB chunk

md126 : active (auto-read-only) raid1 sdb1[1]
      697280 blocks super 1.0 [2/1] [_U]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md127 : active (auto-read-only) raid1 sdb2[1] sda2[0]
      1052608 blocks super 1.0 [2/2] [UU]
      resync=PENDING
      bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>
********************

Please note that /dev/md126 is the RAID volume of interest here, the one I configured as /dev/md0, to be mounted as /boot.
I have seen situations with those numbers (md125/6/7) before - I am not sure, but I think they were caused by left-over superblocks on drives no longer configured/used for raid.
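[Editor's note: leftover superblocks can be checked for and cleared with standard mdadm commands; a short sketch, with an example partition name. Zeroing must only be done on a device that is not part of a live array:

********************
linux:~ # mdadm --examine /dev/sdb2          # prints any old RAID superblock still on the device
linux:~ # mdadm --zero-superblock /dev/sdb2  # erases it, so it can no longer confuse auto-assembly
********************]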
So I can conclude that the array was created with both sda1 and sdb1 but, for some unknown reason, sda1 was dropped before the first synchronization.
Or rather, it was never added properly. Those device numbers are a clear indication that something went wrong. If you search bugzilla for e.g. md126, I'm sure you'll find a couple of hits. If you can repeat the exercise, I would do a normal installation start, then swap to a console (Ctrl-Alt-F1) and check the md status - if you see any unwanted arrays, stop them, then swap back (Alt-F7) and continue with the partitioner. Also, the yast logs will very likely tell you what the problem is.
-- Per Jessen, Zürich (12.2°C) http://www.dns24.ch/ - free DNS hosting, made in Switzerland.
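[Editor's note: a sketch of that check on the text console during installation; the md name is an example, whatever /proc/mdstat lists is what matters:

********************
linux:~ # cat /proc/mdstat          # list any arrays that were auto-assembled
linux:~ # mdadm --stop /dev/md126   # stop each unwanted array before partitioning
********************]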
On 22/03/13 15:45, Per Jessen wrote:
I have seen situations with those numbers (md125/6/7) before - I am not sure, but I think they were caused by left-over superblocks on drives no longer configured/used for raid.
I have two non-system RAID1 arrays [/home and /music]. Whenever I upgrade the system, mdadm assigns them to md126 and md127. In the past, I've had to jump through hoops with mdadm to get them back to md0 and md1, but following the latest fresh installation of 12.3, they reverted to md0 and md1 after a reboot.

Bob
-- Bob Williams
System: Linux 3.7.10-1.1-desktop
Distro: openSUSE 12.3 (x86_64) with KDE Development Platform: 4.10.00 "release 1"
Uptime: 06:00am up 3 days 20:55, 3 users, load average: 1.05, 0.54, 0.32
2) /dev/md0 (which is mounted as /boot) is not correctly initialized and, at first reboot, only /dev/sdb1 is an active part of the RAID volume.
sda1 is not even part of the config? Your array is running degraded? I'm asking because it takes very specific commands to set up a RAID1 with only _one_ drive.
After the first reboot, this is the situation:

********************
linux:~ # cat /proc/mdstat
Personalities : [raid1]
md125 : active (auto-read-only) raid1 sdb3[1] sda3[0]
      23069568 blocks super 1.0 [2/2] [UU]
      resync=PENDING
      bitmap: 1/1 pages [4KB], 65536KB chunk

md126 : active (auto-read-only) raid1 sdb1[1]
      697280 blocks super 1.0 [2/1] [_U]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md127 : active (auto-read-only) raid1 sdb2[1] sda2[0]
      1052608 blocks super 1.0 [2/2] [UU]
      resync=PENDING
      bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>
********************

Please note that /dev/md126 is the RAID volume of interest here, the one I configured as /dev/md0, to be mounted as /boot.
I have seen situations with those numbers (md125/6/7) before - I am not sure, but I think they were caused by left-over superblocks on drives no longer configured/used for raid.
I'm sorry, I had not fully explained the situation. The mdstat output displayed above is what I obtained after booting the machine with the recovery CD (the system alone was not yet able to boot properly). Device names like md125/6/7 are used when RAID volumes are autodetected by the recovery CD, which does not have an mdadm.conf file with the right definitions.
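[Editor's note: for reference, a rescue environment can regenerate those definitions from the assembled arrays; a hedged sketch. On the installed system the file would live in the mounted root, not in the rescue system's /etc:

********************
linux:~ # mdadm --detail --scan                     # prints an ARRAY line per assembled volume
linux:~ # mdadm --detail --scan >> /etc/mdadm.conf  # persist them so the mdX names stick
********************]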
So I can conclude that the array was created with both sda1 and sdb1 but, for some unknown reason, sda1 was dropped before the first synchronization.
Or rather, it was never added properly. Those device numbers are a clear indication that something went wrong. If you search bugzilla for e.g. md126, I'm sure you'll find a couple of hits.
If you can repeat the exercise, I would do a normal installation start, then swap to a console (Ctrl-Alt-F1) and check the md status - if you see any unwanted arrays, stop them, then swap back (Alt-F7) and continue with the partitioner. Also, the yast logs will very likely tell you what the problem is.
I'll do some more tests, but I'm already quite sure the disks were completely zeroed before the installation, because in a previous test I found it was almost impossible to use the YaST partitioner on an already partitioned disk (probably another YaST bug).
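[Editor's note: one way to get disks genuinely clean before such a test, sketched with example device names. All of these are destructive, so double-check the targets:

********************
linux:~ # mdadm --zero-superblock /dev/sda1   # per former RAID member; note that 1.0
                                              # superblocks live at the END of the partition,
                                              # so zeroing the start of the disk is not enough
linux:~ # wipefs -a /dev/sda                  # or: wipe all filesystem/RAID signatures wipefs knows
linux:~ # dd if=/dev/zero of=/dev/sda bs=1M count=10   # or: bluntly zero the partition table area
********************]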
I'm going to open the bug.
I've just opened a bug: https://bugzilla.novell.com/show_bug.cgi?id=811830 Please post any details or similar experiences which you think could be significant.
participants (7)
- Bob Williams
- Christopher Myers
- Dave Howorth
- James Knott
- jjletho
- jjletho67-esus@yahoo.it
- Per Jessen