Has anyone successfully gotten an autoinstall of LVM over software RAID? I can
get software RAID alone to work, but not LVM over software RAID. I'm working
from SuSE 8.1 Professional (boxed). The configuration snippet follows at the
end of this email. Note that this is a class I had been trying to merge with
other profiles, so it is not a complete autoinstall configuration. I get a
single error when I run it through "xmllint --valid sw-raid.xml":

sw-raid.xml:145: validity error: Element raid_device content doesn't follow the DTD
Expecting (device_name , parity_algorithm , persistent_superblock , raid_level ,
chunk_size , mount , format , filesystem), got (device_name parity_algorithm
persistent_superblock raid_level chunk_size mount format lvm_group )
</raid_device>
           ^

I had to put multiple logical_volumes elements in to get this far; it looks
like there's a bug in the DTD (the following patch against
autoyast2-installation-2.6.40-0 might fix it):

--- /usr/share/YaST2/include/autoinstall/misc2.dtd.orig	2003-01-01 19:40:39.000000000 +0000
+++ /usr/share/YaST2/include/autoinstall/misc2.dtd	2003-02-04 20:50:57.000000000 +0000
@@ -20,7 +20,7 @@
 <!ATTLIST inetd_services config:type CDATA #REQUIRED
-<!ELEMENT logical_volumes (lv)>
+<!ELEMENT logical_volumes (lv+)>
 <!ATTLIST logical_volumes config:type CDATA #REQUIRED
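The raid_device declaration presumably needs a similar change before LVM over
software RAID can even validate: the error above shows it only accepts
filesystem as the final child, while an array feeding a volume group carries
lvm_group there instead. Something along these lines might do it (an untested
guess on my part, not a verified fix):

<!ELEMENT raid_device (device_name, parity_algorithm, persistent_superblock,
                       raid_level, chunk_size, mount, format,
                       (filesystem | lvm_group))>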
I'm not sure how strictly (or if) the auto-installer follows the DTD, so I
don't know if there are other fixes needed besides the DTD.

In spite of this and a few other minor issues, I have to say I'm quite
impressed with SuSE's auto-install system; Anas has done a great job!
(Especially as it appears to be a one-person show, judging from the traffic
on this list.)

Best regards,
Brian

sw-raid.xml:

<?xml version="1.0"?>
<!DOCTYPE profile SYSTEM "/usr/share/YaST2/include/autoinstall/profile.dtd">
<!--
  $Header: /cvsroot/config/classes/lb/var/lib/autoinstall/repository/templates/sw-raid.xml,v 1.2 2003/02/03 04:56:22 bstrand Exp $

  Software RAID1 over two IDE drives (/dev/hde, /dev/hdg) on the onboard
  Promise IDE card.
-->
<profile xmlns="http://www.suse.com/1.0/yast2ns"
         xmlns:config="http://www.suse.com/1.0/configns">
  <install>
    <lvm config:type="list">
      <lvm_group>
        <lvm_name>switch</lvm_name>
        <pesize>4M</pesize>
        <!-- One logical_volumes wrapper per lv, to work around the
             (lv) content model in the unpatched DTD. -->
        <logical_volumes config:type="list">
          <lv>
            <lv_name>root</lv_name>
            <lv_size>1000M</lv_size>
            <lv_fs>reiser</lv_fs>
            <lv_mount>/</lv_mount>
          </lv>
        </logical_volumes>
        <logical_volumes config:type="list">
          <lv>
            <lv_name>usr</lv_name>
            <lv_size>1800M</lv_size>
            <lv_fs>reiser</lv_fs>
            <lv_mount>/usr</lv_mount>
          </lv>
        </logical_volumes>
        <logical_volumes config:type="list">
          <lv>
            <lv_name>usr-local</lv_name>
            <lv_size>200M</lv_size>
            <lv_fs>reiser</lv_fs>
            <lv_mount>/usr/local</lv_mount>
          </lv>
        </logical_volumes>
        <logical_volumes config:type="list">
          <lv>
            <lv_name>opt</lv_name>
            <lv_size>1000M</lv_size>
            <lv_fs>reiser</lv_fs>
            <lv_mount>/opt</lv_mount>
          </lv>
        </logical_volumes>
        <logical_volumes config:type="list">
          <lv>
            <lv_name>var</lv_name>
            <lv_size>2000M</lv_size>
            <lv_fs>reiser</lv_fs>
            <lv_mount>/var</lv_mount>
          </lv>
        </logical_volumes>
        <logical_volumes config:type="list">
          <lv>
            <lv_name>home</lv_name>
            <lv_size>1000M</lv_size>
            <lv_fs>reiser</lv_fs>
            <lv_mount>/home</lv_mount>
          </lv>
        </logical_volumes>
        <logical_volumes config:type="list">
          <lv>
            <lv_name>tmp</lv_name>
            <lv_size>800M</lv_size>
            <lv_fs>reiser</lv_fs>
            <lv_mount>/tmp</lv_mount>
          </lv>
        </logical_volumes>
        <logical_volumes config:type="list">
          <lv>
            <lv_name>swap</lv_name>
            <lv_size>200M</lv_size>
            <lv_fs>swap</lv_fs>
            <lv_mount>swap</lv_mount>
          </lv>
        </logical_volumes>
      </lvm_group>
    </lvm>
    <partitioning config:type="list">
      <drive>
        <device>/dev/hde</device>
        <partitions config:type="list">
          <partition>
            <format config:type="boolean">false</format>
            <mount/>
            <!-- partition_id 253 = 0xfd, Linux RAID autodetect -->
            <partition_id config:type="integer">253</partition_id>
            <partition_nr config:type="integer">1</partition_nr>
            <raid_name>/dev/md0</raid_name>
            <size>100M</size>
          </partition>
          <partition>
            <format config:type="boolean">false</format>
            <mount/>
            <partition_id config:type="integer">253</partition_id>
            <partition_nr config:type="integer">2</partition_nr>
            <raid_name>/dev/md1</raid_name>
            <size>max</size>
          </partition>
        </partitions>
        <use>all</use>
      </drive>
      <drive>
        <device>/dev/hdg</device>
        <partitions config:type="list">
          <partition>
            <format config:type="boolean">false</format>
            <mount/>
            <partition_id config:type="integer">253</partition_id>
            <partition_nr config:type="integer">1</partition_nr>
            <raid_name>/dev/md0</raid_name>
            <size>100M</size>
          </partition>
          <partition>
            <format config:type="boolean">false</format>
            <mount/>
            <partition_id config:type="integer">253</partition_id>
            <partition_nr config:type="integer">2</partition_nr>
            <raid_name>/dev/md1</raid_name>
            <size>max</size>
          </partition>
        </partitions>
        <use>all</use>
      </drive>
    </partitioning>
    <raid config:type="list">
      <raid_device>
        <device_name>/dev/md0</device_name>
        <parity_algorithm>left-asymmetric</parity_algorithm>
        <persistent_superblock>true</persistent_superblock>
        <raid_level>1</raid_level>
        <chunk_size>4</chunk_size>
        <mount>/boot</mount>
        <format config:type="boolean">true</format>
        <filesystem config:type="symbol">ext2</filesystem>
      </raid_device>
      <raid_device>
        <device_name>/dev/md1</device_name>
        <parity_algorithm>left-asymmetric</parity_algorithm>
        <persistent_superblock>true</persistent_superblock>
        <raid_level>1</raid_level>
        <chunk_size>4</chunk_size>
        <mount/>
        <format config:type="boolean">false</format>
        <lvm_group>switch</lvm_group>
      </raid_device>
    </raid>
  </install>
</profile>
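P.S. For anyone wanting to compare, here is the stack this profile is meant
to produce, built by hand with raidtools and LVM1 (device and group names
taken from the profile above; a sketch from memory, not a tested recipe):

# /etc/raidtab entry for the array that will become the LVM physical volume
raiddev /dev/md1
    raid-level            1
    nr-raid-disks         2
    persistent-superblock 1
    chunk-size            4
    device                /dev/hde2
    raid-disk             0
    device                /dev/hdg2
    raid-disk             1

# build the array, then layer LVM on top of it
mkraid /dev/md1                    # assemble the RAID1 array from raidtab
pvcreate /dev/md1                  # turn the array into an LVM physical volume
vgcreate -s 4M switch /dev/md1     # volume group "switch" with 4M extents
lvcreate -L 1000M -n root switch   # one lvcreate per <lv> element
mkreiserfs /dev/switch/root        # and one mkreiserfs per reiser volume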
* Brian Strand <bstrand@switchmanagement.com> [Feb 04. 2003 22:01]:
> Has anyone successfully gotten an autoinstall of LVM over software RAID?

This is not supported, and I am not sure if it works. Maybe there is some
workaround for it, but I have never tested one...
> I can get software RAID alone to work, but not LVM over software RAID. I'm
> working from SuSE 8.1 Professional (boxed). The configuration snippet
> follows at the end of this email. Note that this is a class I had been
> trying to merge with other profiles, so it is not a complete autoinstall
> configuration. I get a single error when I run it through
> "xmllint --valid sw-raid.xml":
> sw-raid.xml:145: validity error: Element raid_device content doesn't follow the DTD
> Expecting (device_name , parity_algorithm , persistent_superblock , raid_level ,
> chunk_size , mount , format , filesystem), got (device_name parity_algorithm
> persistent_superblock raid_level chunk_size mount format lvm_group )
> </raid_device>
>            ^
> I had to put multiple logical_volumes elements in to get this far; it looks
> like there's a bug in the DTD (the following patch against
> autoyast2-installation-2.6.40-0 might fix it):
> --- /usr/share/YaST2/include/autoinstall/misc2.dtd.orig	2003-01-01 19:40:39.000000000 +0000
> +++ /usr/share/YaST2/include/autoinstall/misc2.dtd	2003-02-04 20:50:57.000000000 +0000
> @@ -20,7 +20,7 @@
>  <!ATTLIST inetd_services config:type CDATA #REQUIRED
> -<!ELEMENT logical_volumes (lv)>
> +<!ELEMENT logical_volumes (lv+)>
>  <!ATTLIST logical_volumes config:type CDATA #REQUIRED
> I'm not sure how strictly (or if) the auto-installer follows the DTD, so I
> don't know if there are other fixes needed besides the DTD.
It's flexible. The DTD is good for verifying the syntax before you start the
installation, but no validation is done during the installation itself.
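For example, one way to check a profile against the (patched) DTD by hand
before an installation, assuming the patch above was saved as
/tmp/misc2-dtd.patch and the profile as /tmp/sw-raid.xml (both paths are just
examples):

cd /usr/share/YaST2/include/autoinstall
patch -b misc2.dtd < /tmp/misc2-dtd.patch   # -b keeps misc2.dtd.orig as a backup
xmllint --valid --noout /tmp/sw-raid.xml    # silent if the profile validates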
> In spite of this and a few other minor issues, I have to say I'm quite
> impressed with SuSE's auto-install system; Anas has done a great job!
> (Especially as it appears to be a one-person show, judging from the traffic
> on this list.)
Thanks! It's not really a one-person show: autoinstall depends heavily on
core parts of YaST2, other YaST modules, and so on, so many people are
involved. I am just making sure everything fits together and adding the
missing parts :-)

Anas
--
Anas Nashif <nashif@suse.com>, SuSE Linux AG
Montreal (Laval), Canada