
Re: [opensuse] Re: XFS and openSUSE 12.1
  • From: Roger Oberholtzer <roger@xxxxxx>
  • Date: Tue, 04 Jun 2013 09:05:34 +0200
  • Message-id: <1370329534.9026.25.camel@acme.pacific>
On Mon, 2013-06-03 at 17:13 -0700, Linda Walsh wrote:
Roger Oberholtzer wrote:
On Mon, 2013-06-03 at 11:57 -0700, Linda Walsh wrote:
Roger Oberholtzer wrote:
I am using XFS on a 12.1 system. The system records JPEG data to large
files in real time. We have used XFS for this for quite a while on
openSUSE 11.2, since one of its listed features is that it is well
suited to writing streaming media data.

We have developed a new version of this system that collects more data.
What I have found is that the JPEG data is typically written at the
speed I expect. Every once in a while, though, the write takes ~100x
longer: instead of the expected 80 msec or so to do the compress and
write, it takes, say, 4 or 5 seconds.
----

1) Have you tried using an XFS realtime section? It was designed to
prevent this type of lag.

I will have to explore this. I am not familiar with it.
----
It's basically a way to get guaranteed I/O speeds, but
I think it sacrifices some flexibility -- like maybe requiring pre-allocation
of files (a pure guess as to what the requirements are, as I haven't used it
either).
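A minimal sketch of how a file is directed at the realtime section,
assuming the filesystem was made with a realtime subvolume
(mkfs.xfs -r rtdev=...) and mounted with a matching rtdev= option; the
flag must be set while the file is still empty, and on older kernels the
names are XFS_IOC_FSGETXATTR / XFS_XFLAG_REALTIME from the xfsprogs headers:

    /* Sketch: open a new, still-empty capture file and flag it for the
       XFS realtime section. Assumes mkfs.xfs -r rtdev=... at creation
       time and -o rtdev=... at mount time. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/fs.h>   /* struct fsxattr, FS_IOC_FSGETXATTR, FS_XFLAG_REALTIME */

    int open_realtime(const char *path)
    {
        int fd = open(path, O_CREAT | O_WRONLY | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return -1; }

        struct fsxattr fsx;
        if (ioctl(fd, FS_IOC_FSGETXATTR, &fsx) == 0) {
            fsx.fsx_xflags |= FS_XFLAG_REALTIME;  /* allocate extents from the rt device */
            if (ioctl(fd, FS_IOC_FSSETXATTR, &fsx) < 0)
                perror("FS_IOC_FSSETXATTR");      /* no rt section, or file not empty */
        }
        return fd;
    }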



It is a binary file that grows and grows (up to 2 GB, which is the max
file size we allow). The file contains a stream of JPEG images, one
after another. Each image is 1920 x 450, and there are 50 of these per
second at max speed.
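(For scale, assuming a typical ~10:1 JPEG compression ratio: 1920 x 450
x 3 bytes is about 2.6 MB per uncompressed frame, so 50 frames/s is
roughly 130 MB/s into the compressor and on the order of 13 MB/s to
disk, filling a 2 GB file in about 2.5 minutes.)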
----
What I'm not clear on is your earlier statement that you increased
the size per image and are now re-writing them. Is there a 'rewrite'
involved, or are you simply dumping data to disk as fast as you can?

Bad description by me. We have been using XFS for this type of
application for years. Recently we changed our cameras to ones with
higher resolution, so we are now writing at a higher data rate. Only a
single file is being written to, and we do not delete files. We would
expect fragmentation to be minimal for these two reasons.
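(That expectation is easy to verify: xfs_bmap -v <file> lists the
extents of a single file, and xfs_db -r -c frag /dev/<partition> prints
a fragmentation figure for the whole filesystem.)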

If it is the latter -- pre-allocate your space, and you will save
yourself tons of perf issues, e.g. with posix_fallocate() (or an
equivalent call such as the XFS_IOC_RESVSP64 ioctl).

If you have a secondary process allocate one of these when the old
one gets to 75% full, you shouldn't notice any hiccups.
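A minimal sketch of that pattern using the portable posix_fallocate()
(the helper name and the 75% trigger policy are illustrative):

    /* Sketch: reserve the full 2 GB for the next capture file up front
       so the allocator never runs on the hot path. On XFS,
       posix_fallocate() reserves the extents without writing zeroes.
       Compile 32-bit builds with -D_FILE_OFFSET_BITS=64 so off_t can
       hold 2 GB. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    #define CAPTURE_MAX (2LL * 1024 * 1024 * 1024)   /* 2 GB file-size cap */

    /* Call from a secondary thread/process when the current file is ~75% full. */
    int preallocate_next(const char *path)
    {
        int fd = open(path, O_CREAT | O_WRONLY, 0644);
        if (fd < 0) { perror("open"); return -1; }

        int err = posix_fallocate(fd, 0, CAPTURE_MAX);  /* returns an errno value */
        if (err)
            fprintf(stderr, "posix_fallocate: error %d\n", err);
        close(fd);
        return err ? -1 : 0;
    }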

Second thing -- someone else mentioned it -- and it applies whether you
are writing or rewriting (so it's independent of that variable): do your
writes with O_DIRECT and do your own buffering, flushing on at least
1 MB, better 16 MB, boundaries. If you use O_DIRECT you will want to be
page- and sector-aligned (I think the kernel changed, and now you HAVE
to be) or you will get an error indication.

You will get about a 30% or greater increase in write throughput. This
assumes your app doesn't immediately turn around and need to read the
data again, in which case you'd be penalized by not using the buffer cache.
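A minimal sketch of that scheme, assuming a page-aligned 16 MB staging
buffer (names and sizes are illustrative; the final partial chunk still
needs separate handling, e.g. padding to alignment or clearing O_DIRECT
with fcntl(F_SETFL) before the last write):

    /* Sketch: O_DIRECT writer with its own 16 MB staging buffer.
       O_DIRECT requires the buffer address, write length, and file
       offset to be aligned (page/sector granularity, depending on
       kernel version). */
    #define _GNU_SOURCE                 /* exposes O_DIRECT */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    #define CHUNK (16 * 1024 * 1024)    /* flush on 16 MB boundaries */

    static char  *stage;                /* page-aligned staging buffer */
    static size_t fill;                 /* bytes currently buffered */
    static int    fd = -1;

    int writer_open(const char *path)
    {
        if (posix_memalign((void **)&stage, 4096, CHUNK))
            return -1;
        fd = open(path, O_CREAT | O_WRONLY | O_DIRECT, 0644);
        return fd < 0 ? -1 : 0;
    }

    /* Append one compressed frame; write only full, aligned chunks. */
    int writer_append(const void *data, size_t len)
    {
        const char *p = data;
        while (len > 0) {
            size_t n = CHUNK - fill;
            if (n > len) n = len;
            memcpy(stage + fill, p, n);
            fill += n; p += n; len -= n;
            if (fill == CHUNK) {        /* aligned, full-chunk write */
                if (write(fd, stage, CHUNK) != CHUNK) { perror("write"); return -1; }
                fill = 0;
            }
        }
        return 0;
    }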

We never read the data while writing it. The only thing we do is track
the file size via ftell, because none of the JPEG libraries report how
much they have written.
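(One caution there: ftell() returns a long, which on a 32-bit build
overflows at exactly that 2 GB cap; ftello(), which returns an off_t,
together with -D_FILE_OFFSET_BITS=64, avoids the edge case.)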

Do you watch your free memory? I have an "xosview" window open with
LOAD/CPU/MEM/DISK (and an outside net meter)... where I can see used
memory or cache memory becoming tight. Attached is a sample of what you
can see... I did a kernel build (make -j) so you could see how it
emptied out the cache, for example.

We have at least 10 GB free memory. And typically 10 idle CPUs.


The system has no problem doing this. It can work
fine for 30 minutes. Then a single compress suddenly takes 4 or 5
seconds.
---
4-5 seconds after 30 minutes?... Geez, even I have to catch my
breath now and then!

But you are not a computer...

If I write to /dev/null instead of a physical file, the
compress per image stays a constant 10 milliseconds. It is only when I
fopen/fwrite a real file on an XFS disk that this happens.

If #3 is true, you might get a better long-term performance improvement
by restructuring your database: copy the files to another partition,
and in your fstab set allocsize= on that new partition to the largest
size your files will become. This will spread out data when the
allocator first allocates the files, so later updates won't require
finding space that is far from the file.

I have seen suggestions of using allocsize=64m when writing streaming
media like this. I will be trying it.
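For concreteness, that would look something like this in /etc/fstab
(device and mount point are placeholders):

    /dev/sdb1  /data  xfs  defaults,allocsize=64m  0 0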


Yours sincerely,

Roger Oberholtzer

Ramböll RST / Systems

Office: Int +46 10-615 60 20
Mobile: Int +46 70-815 1696
roger.oberholtzer@xxxxxxxxxx
________________________________________

Ramböll Sverige AB
Krukmakargatan 21
P.O. Box 17009
SE-104 62 Stockholm, Sweden
www.rambollrst.se


--
To unsubscribe, e-mail: opensuse+unsubscribe@xxxxxxxxxxxx
To contact the owner, e-mail: opensuse+owner@xxxxxxxxxxxx
