Re: [opensuse] Re: XFS and openSUSE 12.1
  • From: Roger Oberholtzer <roger@xxxxxx>
  • Date: Wed, 12 Jun 2013 12:21:14 +0200
  • Message-id: <1371032474.16366.1.camel@acme.pacific>
On Wed, 2013-06-12 at 11:24 +0200, Carlos E. R. wrote:

> On 2013-06-12 09:37, Roger Oberholtzer wrote:
>> On Wed, 2013-06-12 at 08:24 +0200, Per Jessen wrote:

>>> It only grows as long as nothing else needs the memory. Using up
>>> otherwise unused memory as file system cache seems quite prudent.

>> Would that this were the case. The memory use increases until
>> processes start to be killed, which is the standard way Linux deals
>> with memory shortages.

> No, never. I have never seen that.

> What I have seen is that if a process requests more memory for itself
> and there is not enough, it is taken from the system cache, which
> shrinks until it is almost nil. Then, as the process demands still
> more memory, some processes get killed because there is no free
> memory and no cache left to take from. Not the other way round.

This could be why the app goes away. But the lack of memory that causes
it is this damnable page cache for the file system...
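
One direction I am considering, separate from the workaround discussed
earlier in the thread, is telling the kernel it may drop the pages we
have already written, so our own output does not inflate the cache in
the first place. A minimal sketch of the idea, assuming the chunk has
already been flushed to disk (the file name and sizes below are only
placeholders, not our actual code):

#define _POSIX_C_SOURCE 200112L
/* Sketch only: after syncing a chunk, hint that the written range can
 * be evicted from the page cache. Path and sizes are placeholders. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/data/measurement.bin";   /* placeholder path */
    char buf[64 * 1024];
    off_t done = 0;
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);

    if (fd < 0) {
        perror("open");
        return 1;
    }
    memset(buf, 0, sizeof buf);

    for (int i = 0; i < 256; i++) {               /* ~16 MB, for illustration */
        if (write(fd, buf, sizeof buf) != (ssize_t)sizeof buf) {
            perror("write");
            break;
        }
        done += (off_t)sizeof buf;
        if (done % (4 * 1024 * 1024) == 0) {
            fdatasync(fd);                        /* make the pages clean */
            /* ask the kernel to drop the already-written range */
            posix_fadvise(fd, 0, done, POSIX_FADV_DONTNEED);
        }
    }
    close(fd);
    return 0;
}

Whether POSIX_FADV_DONTNEED actually helps on this kernel I cannot say;
it is just the first thing I would poke at.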

>> Whether the cache size stops at some reasonable point or not is
>> perhaps a side issue. The question remains: why, as the cache
>> grows, do write calls periodically have longer and longer delays
>> (on the order of seconds)? If the cache is not causing this,
>> then why does freeing it with the workaround result in these delays
>> not happening?

> This may be related or coincidental.

But predictable and repeatable.
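
For anyone who wants to see the same thing: it shows up with nothing
more elaborate than timestamping each write(). Something roughly like
the sketch below (file name and sizes are made up for the example);
most calls return in microseconds, then every so often one blocks for
whole seconds as the cache fills:

#define _POSIX_C_SOURCE 200112L
/* Sketch: time every write() so the occasional multi-second stall
 * stands out. The path and write size are illustrative only. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

static double now_sec(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
    static char buf[1024 * 1024];          /* 1 MB per write, placeholder */
    int fd = open("/data/latency_test.bin",
                  O_WRONLY | O_CREAT | O_TRUNC, 0644);

    if (fd < 0) {
        perror("open");
        return 1;
    }
    memset(buf, 0, sizeof buf);

    for (int i = 0; i < 4096; i++) {       /* 4 GB total */
        double t0 = now_sec();
        if (write(fd, buf, sizeof buf) != (ssize_t)sizeof buf) {
            perror("write");
            break;
        }
        double dt = now_sec() - t0;
        if (dt > 0.5)                      /* report anything over half a second */
            printf("write %d took %.2f s\n", i, dt);
    }
    close(fd);
    return 0;
}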



Yours sincerely,

Roger Oberholtzer

Ramböll RST / Systems

Office: Int +46 10-615 60 20
Mobile: Int +46 70-815 1696
roger.oberholtzer@xxxxxxxxxx
________________________________________

Ramböll Sverige AB
Krukmakargatan 21
P.O. Box 17009
SE-104 62 Stockholm, Sweden
www.rambollrst.se


