Mailinglist Archive: opensuse (1239 mails)

Re: [opensuse] Re: XFS and openSUSE 12.1
  • From: Roger Oberholtzer <roger@xxxxxx>
  • Date: Mon, 03 Jun 2013 23:46:41 +0200
  • Message-id: <1370296001.29626.5.camel@localhost>
On Mon, 2013-06-03 at 11:57 -0700, Linda Walsh wrote:
> Roger Oberholtzer wrote:
>> I am using XFS on a 12.1 system. The system records JPEG data to large
>> files in real time. We have used XFS for this for a while, since one of
>> its listed features is that it is well suited to writing streaming
>> media data. We used it this way for quite a while on openSUSE 11.2.
>>
>> We have developed a new version of this system that collects more data.
>> What I have found is that the JPEG data is typically written at the
>> speed I expect, but every once in a while a write takes 100x longer.
>> Instead of the expected 80 msec or so to do the compress and write, it
>> takes, say, 4 or 5 seconds.
>
> 1) Have you tried using an XFS realtime section? It was designed to
> prevent this type of lag.

I will have to explore this. I am not familiar with it.
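[For reference, not from the thread: an XFS realtime section lives on a separate device named at mkfs and mount time, and file data opts in via an inode flag. A rough sketch below; the device names and mount point are placeholders, and the 'chattr +t' inherit flag should be checked against xfs_io(8) for your version.]

```shell
# Hypothetical devices: /dev/sdb1 holds the normal (metadata) section,
# /dev/sdc1 becomes the realtime section for streaming file data.
mkfs.xfs -r rtdev=/dev/sdc1 /dev/sdb1

# The realtime device must also be named when mounting.
mount -o rtdev=/dev/sdc1 /dev/sdb1 /data

# Set the realtime-inherit flag on a directory so files created in it
# are allocated from the realtime section, whose simple bitmap
# allocator gives more predictable allocation latency.
xfs_io -c 'chattr +t' /data/capture
```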

> 2) How full is the disk, and how fragmented is its free space?

Newly formatted. No fragmentation.
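[For anyone wanting to verify this rather than assume it, xfs_db can report both file and free-space fragmentation; the device name below is a placeholder and the commands need root:]

```shell
# File-fragmentation summary for an XFS filesystem on /dev/sdb1.
xfs_db -r -c frag /dev/sdb1

# Free-space fragmentation: a histogram of free extents by size.
xfs_db -r -c freesp /dev/sdb1
```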

> 3) I get the impression that you are collecting more data for the same
> records, such that each record (or a significant number of them) grows
> beyond the space originally allocated for it. So the system starts by
> buffering while it looks for space to hold the additional information
> (possibly at the end of the disk); when the buffers fill, the OS kicks
> in to free space and forces a long wait while new space is sought for
> each buffer it wants to empty -- a lengthy search for free blocks that
> won't be near your present data, but most likely at the end of it.
> Does that sound about right?

It is a binary file that grows and grows (up to 2 GB, which is the
maximum file size we allow). The file contains a stream of JPEG images,
one after another. Each image is 1920 x 450 pixels, and there are 50 of
them per second at maximum speed. The system has no problem doing this.
It can work fine for 30 minutes; then a single compress suddenly takes
4 or 5 seconds. If I write to /dev/null instead of a physical file, the
compress per image stays a constant 10 milliseconds. It is only when I
fopen/fwrite a real file on an XFS disk that this happens.
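[One common workaround for stalls caused by a steadily growing file, not something suggested in the thread itself: preallocate the file's final size up front, so the allocator reserves the space once instead of extending the file over and over mid-stream. A sketch; stream.bin is a placeholder name, and a small size is used for illustration where the real system would reserve 2G:]

```shell
# Reserve the file's eventual size in one go (2G in the real system).
fallocate -l 16M stream.bin     # or: xfs_io -c 'resvsp 0 16m' stream.bin

# The file now reports its final size; subsequent writes land in
# already-reserved blocks instead of triggering new allocations.
stat -c '%s bytes' stream.bin
```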

> If #3 is true, you might get better long-term performance by
> restructuring your database: copy the files to another partition, and
> in your fstab set allocsize= on the new partition to the largest size
> your files will become. This spreads the data out when the allocator
> first allocates the files, so later updates won't require finding
> space that is far from the file.
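[For concreteness, an fstab entry along the lines Linda suggests might look like the sketch below; allocsize is a real XFS mount option controlling the preferred (speculative) preallocation size, but the device, mount point, and value here are illustrative:]

```shell
# /etc/fstab sketch: allocsize=2g matches the 2 GB maximum file size,
# so XFS makes large preallocations as each file grows.
/dev/sdb1  /data  xfs  defaults,allocsize=2g  0 0
```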


Yours sincerely,

Roger Oberholtzer

OPQ Systems / Ramböll RST

Office: Int +46 10-615 60 20
Mobile: Int +46 70-815 1696

Ramböll Sverige AB
Krukmakargatan 21
P.O. Box 17009
SE-104 62 Stockholm, Sweden

To unsubscribe, e-mail: opensuse+unsubscribe@xxxxxxxxxxxx
To contact the owner, e-mail: opensuse+owner@xxxxxxxxxxxx
