Roger Oberholtzer wrote:
On Wed, 2013-06-12 at 08:24 +0200, Per Jessen wrote:
It only grows as long as nothing else needs the memory. Using otherwise unused memory as filesystem cache seems quite prudent.
Would that this were the case. The memory use increases until processes start to be killed, which is the standard way Linux deals with memory shortages.
Right, but the memory used for filesystem caching is still available for processes to use.
And it raises the question: why should any file system think it is okay to cache, say, 16GB? At least I would not expect this as the default.
In principle it seems okay I would say, but if the size causes a problem, there ought to be a way of limiting it.
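If it is the dirty (not-yet-written) part of the cache that causes trouble, the kernel does expose tunables for limiting it. A sketch, assuming that is the case; the values below are illustrative, not recommendations:

```
# /etc/sysctl.conf fragment (illustrative values, not recommendations)
# start background writeback once dirty pages reach 5% of RAM
vm.dirty_background_ratio = 5
# stall writing processes once dirty pages reach 10% of RAM
vm.dirty_ratio = 10
```

These only cap the dirty portion of the cache, not clean cached pages.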
If this is a systemic problem, I ought to be able to reproduce it, so yesterday I wrote a little test program to do exactly that. I never saw your 4-5 second delay, but I did see the IO-rate dropping to about half a couple of times. Not regularly though. Much smaller system, single core, 1.5GB RAM, one single harddrive.
In fact my single process has two files open, each on a separate disk. Maybe that is part of the dynamic.
Certainly possible. I might try that too.
If you let your test app run, and the file it creates grow and grow, how does the cache usage progress?
It stayed at about 1G.
The test app needs to open a new file for writing when the previous one has reached the file system's file size limit. And just keep doing this.
Yes, that's what my test does:

    open file#0
    write 2048x1M blocks
    close file#0
    open file#1
    write 2048x1M blocks
    close file#1
    etc.

I presume your test with two files would look like this:

    open file#0
    open file#1
    do 2048 times
        write 1M block to file#0
        write 1M block to file#1
    done
    close file#0
    close file#1
    etc.
And the cache usage will grow and grow...
But my testbox is quite limited in memory, so it doesn't.
I understand that the cache is there so I can possibly read data that has been recently written. However, I do not see how the kernel can just grow this cache until my memory is gone.
If something else needs the memory, the kernel will invalidate the cache and give the memory away.
I suspect the memory has been invalidated. But it has not been given away. I base that assumption on the fact that my workaround frees the cache immediately.
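For reference, the usual way of freeing the caches by hand on Linux (which may or may not be the workaround referred to here) is via /proc, as root:

```shell
# flush dirty pages first, then drop clean page cache, dentries and inodes
sync
echo 3 > /proc/sys/vm/drop_caches
```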
If no other process needs it, there's no reason to give it away.
Whether the cache size stops at some reasonable point or not is perhaps a side issue. The question remains: why, as the cache grows, do write calls periodically have longer and longer delays (on the order of seconds)? If the cache is not causing this, then why does freeing it with the workaround result in these delays not happening?
Good question. There seems to be a direct link between the cache size and the delay. That's why I suggested you write a little program to allocate most of the memory such that the cache is kept small.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        long sz;
        char *m;
        int a;

        if (argc < 2) {
            fprintf(stderr, "usage: %s <megabytes>\n", argv[0]);
            return 1;
        }
        sz = atol(argv[1]);
        m = malloc(sz * 1024 * 1024);
        if (m == NULL) {
            perror("malloc");
            return 1;
        }
        a = 1;
        for (;;) {
            /* touch every page so the allocation stays resident */
            memset(m, a++, sz * 1024 * 1024);
            sleep(5);
        }
    }

--
Per Jessen, Zürich (21.4°C)
http://www.dns24.ch/ - free DNS hosting, made in Switzerland.