Hi all,

Does anyone know of a way to make autoyast reread the partition table(s) after (or at the end of) a pre-script execution?

I'm trying to remove the current partitions, if any, using "parted", which succeeds. However, I'm guessing that autoyast already has a pretty good idea of the current partition plan, based on a partition table(s) read prior to my usage of "parted"?

R/Lars S
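(For illustration, the kind of parted cleanup meant here, as it appears in the wipe.sh pre-script later in this thread; blockdev --rereadpt is one standard way to ask the kernel to reread the table, though whether autoyast then picks up the change is exactly the open question:

    # remove every partition on a disk; /dev/hda is an example device
    for p in `parted /dev/hda print | grep '^[1-9]' | cut -d' ' -f1`; do
        parted /dev/hda rm $p
    done
    # ask the kernel to reread the partition table (util-linux)
    blockdev --rereadpt /dev/hda
)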
On June 29, 2003 12:29 pm, Lars Stavholm wrote:
Hi all,
does anyone know of a way to make autoyast reread the partition table(s) after (or at the end of) a pre-script execution?
I'm trying to remove the current partitions, if any, using "parted", which succeeds. However, I'm guessing that autoyast already has a pretty good idea of the current partition plan based on a partition table(s) read prior to my usage of "parted"?
Why not use autoyast to remove the partitions? You can also initialize the partition table using partitioning/drive/initialize{boolean}.
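In profile terms that looks roughly like this (a sketch; the device name is just an example, and the partition list is omitted):

    <partitioning config:type="list">
      <drive>
        <device>/dev/hda</device>
        <!-- wipe the existing partition table before partitioning -->
        <initialize config:type="boolean">true</initialize>
        <!-- partitions ... -->
      </drive>
    </partitioning>

Anas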
R/Lars S
On Tue, 1 Jul 2003, Anas Nashif wrote:
On June 29, 2003 12:29 pm, Lars Stavholm wrote:
Hi all,
does anyone know of a way to make autoyast reread the partition table(s) after (or at the end of) a pre-script execution?
I'm trying to remove the current partitions, if any, using "parted", which succeeds. However, I'm guessing that autoyast already has a pretty good idea of the current partition plan based on a partition table(s) read prior to my usage of "parted"?
Why not use autoyast to remove the partitions? You can also initialize the partition table using partitioning/drive/initialize{boolean}
Well, I've tried it before, it didn't work too well, and I got sidetracked; I just tried it again, and it still doesn't work as expected.

Unfortunately we're using autoyast-installation 2.7.18-1 from the distro; I don't know whether we should update to 2.7.19-0 or not (advice?).

Below you'll find the complete XML file we're using. Every second installation (on the same system) reports an error:

    LVM Error
    vgcreate -A n -s 4096k vg00 /dev/md0
    vgcreate -- volume group "vg00" already exists

The installation stops, of course; I hit the reset button, and the next time around it works perfectly fine. Why?

The pre-script "wipe.sh" is an attempt to clean out LVM and RAID info and then even the partition tables.

BTW, are the DTDs really up to date? The XML file below will not pass "xmllint --valid --noout" quietly. If I change the XML to fit the DTDs, it doesn't work (I can't remember the fault I got when testing it). My question is: should I trust the DTDs or not?

R/Lars S
---
<?xml version="1.0"?>
<!DOCTYPE profile SYSTEM "/usr/share/autoinstall/dtd/profile.dtd">
<profile xmlns="http://www.suse.com/1.0/yast2ns"
         xmlns:config="http://www.suse.com/1.0/configns">
  <configure>
    <scripts>
      <pre-scripts config:type="list">
        <script>
          <filename>wipe.sh</filename>
          <interpreter>shell</interpreter>
          <source>
<![CDATA[
# stop any active software RAID (md) devices
mds=`lsraid -A -p | grep md | cut -d' ' -f8`
for md in $mds; do
  mdadm -S $md
done

# deactivate and remove any existing LVM volume groups
vgscan
vgs=`echo /dev/vg*`
for vg in $vgs; do
  if [ -d $vg ]; then
    vgchange -a y $vg
    lvs=`ls $vg`
    for lv in $lvs; do
      if [ "$lv" != "group" ]; then
        lvremove -f $vg/$lv
      fi
    done
    vgchange -a n $vg
    vgremove $vg
  fi
done

# zero md superblocks and remove all partitions on all disks
hds=`fdisk -l | grep Disk | cut -d' ' -f2 | cut -d: -f1`
for hd in $hds; do
  ps=`parted $hd print | grep '^[1-9]' | cut -d' ' -f1`
  for p in $ps; do
    mdadm --zero-superblock $hd$p
    parted $hd rm $p
  done
done
]]>
          </source>
        </script>
      </pre-scripts>
    </scripts>
  </configure>
  <install>
    <partitioning config:type="list">
      <drive>
        <device>/dev/hda</device>
        <initialize config:type="boolean">true</initialize>
        <partitions config:type="list">
          <partition>
            <crypt_fs config:type="boolean">false</crypt_fs>
            <crypt_key></crypt_key>
            <filesystem config:type="symbol">ext3</filesystem>
            <format config:type="boolean">true</format>
            <mount>/boot</mount>
            <partition_id config:type="integer">131</partition_id>
            <partition_nr config:type="integer">1</partition_nr>
            <size>50MB</size>
          </partition>
          <partition>
            <crypt_fs config:type="boolean">false</crypt_fs>
            <crypt_key></crypt_key>
            <format config:type="boolean">false</format>
            <raid_name>/dev/md0</raid_name>
            <partition_id config:type="integer">253</partition_id>
            <partition_nr config:type="integer">2</partition_nr>
            <size>max</size>
          </partition>
        </partitions>
        <use>all</use>
      </drive>
      <drive>
        <device>/dev/hdc</device>
        <initialize config:type="boolean">true</initialize>
        <partitions config:type="list">
          <partition>
            <crypt_fs config:type="boolean">false</crypt_fs>
            <crypt_key></crypt_key>
            <filesystem config:type="symbol">ext3</filesystem>
            <format config:type="boolean">true</format>
            <mount>/boot_backup</mount>
            <partition_id config:type="integer">131</partition_id>
            <partition_nr config:type="integer">1</partition_nr>
            <size>50MB</size>
          </partition>
          <partition>
            <crypt_fs config:type="boolean">false</crypt_fs>
            <crypt_key></crypt_key>
            <format config:type="boolean">false</format>
            <raid_name>/dev/md0</raid_name>
            <partition_id config:type="integer">253</partition_id>
            <partition_nr config:type="integer">2</partition_nr>
            <size>max</size>
          </partition>
        </partitions>
        <use>all</use>
      </drive>
    </partitioning>
    <raid config:type="list">
      <raid_device>
        <device_name>/dev/md0</device_name>
        <parity_algorithm>left-asymmetric</parity_algorithm>
        <persistent_superblock config:type="boolean">true</persistent_superblock>
        <raid_type>raid1</raid_type>
        <chunk_size>32</chunk_size>
        <lvm_group>vg00</lvm_group>
        <mount />
        <format config:type="boolean">false</format>
      </raid_device>
    </raid>
    <lvm config:type="list">
      <lvm_group>
        <lvm_name>vg00</lvm_name>
        <pesize>4M</pesize>
        <logical_volumes config:type="list">
          <lv>
            <lv_name>lvol1</lv_name>
            <lv_size>6GB</lv_size>
            <lv_fs>reiser</lv_fs>
            <lv_mount>/</lv_mount>
          </lv>
          <lv>
            <lv_name>lvol2</lv_name>
            <lv_size>256MB</lv_size>
            <lv_mount>swap</lv_mount>
          </lv>
          <lv>
            <lv_name>lvol3</lv_name>
            <lv_size>4GB</lv_size>
            <lv_fs>reiser</lv_fs>
            <lv_mount>/usr</lv_mount>
          </lv>
        </logical_volumes>
      </lvm_group>
    </lvm>
    <software>
      <base>Minimal</base>
    </software>
  </install>
</profile>
---
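(For reference, the validation check mentioned above, assuming the profile is saved locally as profile.xml, the filename being illustrative:

    xmllint --valid --noout profile.xml

A silent exit means the file validates against the DTD; otherwise xmllint prints the validation errors.)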
Hi,

The good news is that we are working on new and improved LVM and RAID handling, which will offer more flexibility in configuring LVM and will remove all those problems people have been reporting when installing LVM...

Still, I have a workaround implemented which might remove some of the problems now. Adding

    install/lvm/lvm_group/destroy_old{boolean} true

will remove all existing LVM instances. It is known to work in most cases. Please try this method and see if it works for you.

Anas

On July 1, 2003 03:04 pm, Lars Stavholm wrote:
On Tue, 1 Jul 2003, Anas Nashif wrote:
On June 29, 2003 12:29 pm, Lars Stavholm wrote:
Hi all,
does anyone know of a way to make autoyast reread the partition table(s) after (or at the end of) a pre-script execution?
I'm trying to remove the current partitions, if any, using "parted", which succeeds. However, I'm guessing that autoyast already has a pretty good idea of the current partition plan based on a partition table(s) read prior to my usage of "parted"?
Why not use autoyast to remove the partitions? You can also initialize the partition table using partitioning/drive/initialize{boolean}
Well, I've tried it before, it didn't work too well, and I got sidetracked; I just tried it again, and it still doesn't work as expected.
Unfortunately we're using autoyast-installation 2.7.18-1 from the distro; I don't know whether we should update to 2.7.19-0 or not (advice?).
Below you'll find the complete XML file we're using. Every second installation (on the same system) reports an error:
    LVM Error
    vgcreate -A n -s 4096k vg00 /dev/md0
    vgcreate -- volume group "vg00" already exists
The installation stops, of course; I hit the reset button, and the next time around it works perfectly fine. Why?
The pre-script "wipe.sh" is an attempt to clean out LVM and RAID info and then even the partition tables.
BTW, are the DTDs really up to date? The XML file below will not pass "xmllint --valid --noout" quietly. If I change the XML to fit the DTDs, it doesn't work (I can't remember the fault I got when testing it). My question is: should I trust the DTDs or not?
R/Lars S

[The complete profile XML was quoted here in full in the original; snipped, see the same file above.]
On Tue, 1 Jul 2003, Anas Nashif wrote:
Hi,
The good news is that we are working on new and improved LVM and RAID handling, which will offer more flexibility in configuring LVM and will remove all those problems people have been reporting when installing LVM...
Still, I have a workaround implemented which might remove some of the problems now. Adding
install/lvm/lvm_group/destroy_old{boolean} true
will remove all existing LVM instances. It is known to work in most cases.
Please try this method and see if it works for you.
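(In the profile, that option would presumably go inside the lvm_group section, roughly like this; a sketch, with the exact element placement inferred from the option path install/lvm/lvm_group/destroy_old:

    <lvm config:type="list">
      <lvm_group>
        <lvm_name>vg00</lvm_name>
        <!-- assumed placement and spelling, per the path above -->
        <destroy_old config:type="boolean">true</destroy_old>
        <!-- logical_volumes as before -->
      </lvm_group>
    </lvm>
)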
No effect at all. Should we update to autoyast2 2.7.19-0? Are the DTDs of any real use, i.e. should I trust the xmllint --valid --noout error messages?

/Lars
Anas
On July 1, 2003 03:04 pm, Lars Stavholm wrote:
[The earlier messages, including the complete profile XML, were quoted here in full in the original; snipped, see the same text above.]
participants (2)
- Anas Nashif
- Lars Stavholm