Mailinglist Archive: opensuse-virtual (68 mails)

Re: [opensuse-virtual] is there a clever way to shrink qcow2 images
  • From: Rob Verduijn <rob.verduijn@xxxxxxxxx>
  • Date: Tue, 20 Aug 2013 20:28:10 +0200
  • Message-id: <CAMkGkc5cVHe_YcWvFzq8Hb_auB0MrpSQqrmhWF7CP3vB=47hmA@mail.gmail.com>
True,

Bigger blocks are written to disk faster, and if the disk is not too
heavily fragmented this will save time.
However, you have to remember that the disk is a virtual disk image on
another disk, so fragmentation on the host disk also matters.
If the virtual disk image itself is fragmented on the host disk, then
the speed gain from bigger block sizes is lost again.

I'm not sure if experimenting for the optimal size is worth the time and effort.

Maybe an automatically defragmenting filesystem like btrfs could be
used on the host to store the images.
However, I have no idea what impact an automatically defragmenting
filesystem has on the sparse properties of the qcow2 image.
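Whether sparseness survives on a given host filesystem can be probed directly; a minimal sketch, assuming GNU coreutils (sparse.img is a throwaway test file made up for illustration, not a real image):

```shell
# Compare the apparent size of a file with the blocks actually allocated
# for it. A sparse qcow2 image on a well-behaved host filesystem should
# allocate far less than it appears to occupy.
truncate -s 100M sparse.img                           # 100 MiB, no blocks written
apparent=$(du --apparent-size -k sparse.img | cut -f1)
allocated=$(du -k sparse.img | cut -f1)
echo "apparent: ${apparent} KiB, allocated: ${allocated} KiB"
rm sparse.img
```

If a defragmenting pass rewrote the image in full, the allocated figure would creep up toward the apparent one.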

Rob

2013/8/20 Tony Su <tonysu@xxxxxxxxxxxxxxxxx>:
But as I noted, you can write zeroes in blocks bigger than 1MB, and as
you make your blocks bigger, that last unwritten block will also get
bigger and may contain data which isn't compressible.
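The trade-off above can be put in numbers; a small sketch with made-up figures, assuming dd only writes whole blocks:

```shell
# dd writes only whole blocks, so everything after the last full block
# stays untouched; the worst case is (blocksize - 1) bytes.
free_bytes=$((5 * 1024 * 1024 + 300))   # pretend 5 MiB + 300 bytes are free
bs=$((1024 * 1024))                     # 1 MiB blocks
whole=$((free_bytes / bs))              # blocks dd can fill completely
leftover=$((free_bytes - whole * bs))   # bytes never zeroed: here 300
echo "zeroed ${whole} blocks, ${leftover} bytes left unzeroed"
```

With a 100 MiB blocksize the same arithmetic leaves up to 100 MiB minus one byte unzeroed, which is where the uncompressible tail comes from.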

Tony

On Tuesday, August 20, 2013, Rob Verduijn <rob.verduijn@xxxxxxxxx> wrote:
Hi,

Indeed, you fill up the last blocks that are not big enough for my
defined blocksize.
But those blocks are very likely to be empty, and the loss of at most
1024k - 1 of data storage doesn't really bother me.

What bothers me is the requirement to fill the empty space with zeroes
and then take down the vm to create a new qcow2 image.
I wish to reclaim the empty space from a qcow2 image in a more
efficient way, preferably without creating a new qcow2 image.
(No downtime would be a nice bonus.)

Rob

2013/8/20 Tony Su <tonysu@xxxxxxxxxxxxxxxxx>:
Haven't tried the following to zero a virtual disk, but it should be
fast; make the bs as big as you want (maybe 10x for very large disks).
Of course, the bigger you make zero.big.file, the longer it <might>
take to create zero.file. My code zeroes all bytes in 2 steps whereas
yours doesn't zero all of them.

dd if=/dev/zero of=zero.big.file bs=1024 count=102400   # fill most free space in whole blocks
cat /dev/zero > zero.file    # fill whatever remains, byte by byte, until the disk is full
rm zero.big.file
rm zero.file

Tony

On Aug 20, 2013 1:39 AM, "Rob Verduijn" <rob.verduijn@xxxxxxxxx> wrote:

Hi,

Using separate images for partitions makes the script a tiny bit less
complex.

I've already been scripting the exercise; the use of multiple
partition images makes it only a tiny bit more complex.
qemu-nbd simply adds another device for each additional partition.

#!/bin/bash
modprobe nbd max_part=16     # number of partitions
qemu-nbd -c /dev/nbd0 --nocache --aio=native /path/to/image.qcow2
mount /dev/nbd0 /mnt         # I didn't partition the image, just formatted it as ext4
dd if=/dev/zero of=/mnt/bigfile bs=1024k; sync; rm /mnt/bigfile; sync
sleep 2
umount /dev/nbd0             # unmount the device
qemu-nbd -d /dev/nbd0        # clean up nbd devices or they will bite you
qemu-img convert -c -O qcow2 /path/to/image.qcow2 /path/to/shrunk.qcow2
mv /path/to/image.qcow2 /path/to/image.qcow2.bak
mv /path/to/shrunk.qcow2 /path/to/image.qcow2
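The two mv lines at the end replace the original even when the convert gained nothing; a hedged variant (swap_if_smaller is a made-up helper name, not part of any tool) only swaps when the new copy is actually smaller on disk:

```shell
# Replace image $1 with copy $2 only when $2 is really smaller; keep the
# original as a .bak either way the swap happens. Hypothetical helper.
swap_if_smaller() {
    old_size=$(stat -c %s "$1")     # size of the current image in bytes
    new_size=$(stat -c %s "$2")     # size of the freshly converted copy
    if [ "$new_size" -lt "$old_size" ]; then
        mv "$1" "$1.bak"            # keep the original as a backup
        mv "$2" "$1"
    else
        rm "$2"                     # the convert gained nothing, drop it
    fi
}
# usage: swap_if_smaller /path/to/image.qcow2 /path/to/shrunk.qcow2
```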

The performance of qemu-nbd is rather poor; using writeback cache is
not really an option since you always need to wait for the zeroes to
be actually written to the hd.
Also, writeback is very hazardous if you use a script to umount and
disconnect the nbd device; image corruption is very likely to happen,
since sync doesn't apply to nbd0 devices and blockdev --flushbufs
/dev/nbd0 isn't foolproof either when scripting.
(Ctrl-c the script at the wrong time and you are in for a recovery.)
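One way to soften the ctrl-c problem is a shell trap, so the flush and disconnect run even when the script is interrupted. A sketch using the same device names as the script above; it doesn't make writeback safe, it only makes the cleanup harder to skip:

```shell
# Run the cleanup on normal exit and on interruption (ctrl-c, kill), so
# the nbd device is flushed and disconnected instead of left behind.
cleanup() {
    umount /dev/nbd0 2>/dev/null
    blockdev --flushbufs /dev/nbd0 2>/dev/null   # best effort, as noted above
    qemu-nbd -d /dev/nbd0 2>/dev/null            # disconnect the nbd device
}
trap cleanup EXIT INT TERM
```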

So the script isn't that difficult, and neither is the use of multiple
partitions in an image.
The biggest drawback is downtime: filling the image with zeroes takes
even longer offline than online.
Ok, you could have a cronjob in the vm that does that nightly, but I
can imagine issues when the hd of a vm is filled to the max at
regular intervals (once a month?), even at expected times and only
for a short moment.
Also, that would mean the shrinking cronjob is required on the host (I
want the shrinking done as soon as possible after the zeroing).
This has to be timed properly with the zeroing cronjob of the guest,
which becomes rather complex with every additional guest.
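The timing problem can be pictured with two crontab entries; all times, paths, and the shrink-qcow2.sh script name here are invented for illustration:

```
# guest crontab: zero the free space at 03:00
0 3 * * *  dd if=/dev/zero of=/bigfile bs=1024k; sync; rm /bigfile; sync
# host crontab: shrink one hour later, hoping every guest has finished zeroing
0 4 * * *  /usr/local/bin/shrink-qcow2.sh /path/to/image.qcow2
```

The one-hour gap is just a guess; nothing actually synchronises the two jobs, which is exactly the complexity that multiplies with every additional guest.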

Regards
Rob


2013/8/19 Tony Su <tonysu@xxxxxxxxxxxxxxxxx>:
Not specific to qcow:
in the past, if I wanted to partition files, I'd deploy multiple disks
instead of partitions. I don't know if there is a significant overhead
difference, but I found performance did not suffer. Once on separate
disks, it should be trivial to script the procedure.

You can execute your conversion on any storage, just as fast as
possible.
So, for instance, it can even be cheap temporarily attached storage.

And AFAIK it has to be done offline, although I suppose a fancy live
migration could be implemented so you're not offline.
Tony

On Aug 19, 2013 11:50 AM, "Rob Verduijn" <rob.verduijn@xxxxxxxxx>
wrote:

Hello all,

I'm looking for a clever way to shrink qcow2 images.

what I do now is :

1 in the vm delete the files I don't need (tempfiles
--
To unsubscribe, e-mail: opensuse-virtual+unsubscribe@xxxxxxxxxxxx
To contact the owner, e-mail: opensuse-virtual+owner@xxxxxxxxxxxx
