Per Jessen said the following on 06/13/2013 02:42 AM:
Roger Oberholtzer wrote:
On Wed, 2013-06-12 at 18:51 -0400, Anton Aylward wrote:
Roger Oberholtzer said the following on 06/12/2013 10:00 AM:
Of course the cache doesn't go away when he uses sync! Why should it? It's a cache, not a buffer. Instead, it grows and grows...
As others have said, the cache will grow to use all available memory.
That is a GOOD THING for a cache to do.
Even when that interferes with running apps' requests for memory? It has been said that if memory is needed by some app, this cache may shrink to accommodate it. I am not sure that is the case. When memory use gets big, some apps die.
Roger, if that were true, we'd all be in big trouble. The use of spare memory for the filesystem cache is purely opportunistic - if an app needs memory, it will get it. Provided there is free memory, of course - but "free memory" includes memory used for caching.
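A quick way to see this for yourself (a minimal sketch, not from this thread; it assumes a Linux /proc/meminfo with the usual field names, and the MemAvailable field only exists on newer kernels, so the script falls back to a rough estimate without it):

#!/usr/bin/env python3
# Minimal sketch: read /proc/meminfo and show that page-cache memory is
# counted as reclaimable/available, not as "used up".  Field names are the
# standard /proc/meminfo keys; MemAvailable is assumed to exist on newer
# kernels, with a rough fallback where it is absent.

def read_meminfo(path="/proc/meminfo"):
    """Return /proc/meminfo as a dict of {field: kB}."""
    info = {}
    with open(path) as f:
        for line in f:
            key, rest = line.split(":", 1)
            info[key] = int(rest.strip().split()[0])  # values are in kB
    return info

if __name__ == "__main__":
    m = read_meminfo()
    free = m.get("MemFree", 0)
    cache = m.get("Cached", 0) + m.get("Buffers", 0)
    avail = m.get("MemAvailable", free + cache)  # rough estimate if absent
    print("MemFree:        %10d kB" % free)
    print("Cached+Buffers: %10d kB  (reclaimed when an app asks)" % cache)
    print("Available:      %10d kB" % avail)

Run it, then start a memory-hungry application and run it again: Cached should shrink while the application still gets its memory, without anyone having to "free" anything by hand.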
+1

Simple enough to test ... That the system runs at all for a long time - running many scripts, browsing and saving, lots of stuff going through /tmp, lots of browser tabs opening and closing, process space growing and shrinking, lots of ... well, lots of ...

What differentiated the UNIX model from "what went before" was that process creation was lightweight. That's what made shell programming so successful! The old mainframe processes like CICS were all long-lived and needed constant 'tuning'. The UNIX processes spawned by the shell didn't live long enough to be worth tuning, or were spending most of their time sleeping for one reason or another. It wasn't until we started doing things 'the mainframe way', such as running Oracle[1], that we had long-lived processes. Things got worse from there.

One large military project I worked on back in the Cold War era was based around a VAX cluster. The guidelines required that each application locked itself in core in order to get "adequate performance". I saw this as a failure to understand how scheduling and memory use worked. I wrote my app to be very small and modular, so much so that the scheduler never thought to 'swap' it out when there were more productive ways to free up memory.

Analysis is always useful, and matching the strategy to the 'business needs' is so important that it cannot be over-emphasised. Linux, like Windows, doesn't need careful tuning in the common case of an 'office desktop', but there are more and more specialised situations, and they do need tuning. Unlike CICS, they can - usually - be tuned once, or until the application base changes. As I keep pointing out, Roger has what amounts to a write-only situation, and it's running on a huge brute of a machine. This is nothing like the "default for desktop/gui/office" out-of-the-box settings.

[1] To be fair ... IBM started with DB2 as the same monolithic model it had on the mainframes when it put DB2 up on AIX. Eventually they figured out the 'right way' was to do it the same way other native UNIX DBs worked, such as Progress: have a number of small cooperating processes and a few more that get spawned and die. The result was more responsive, and it shifted the 'tuning' from the process to the database and disk, and to dealing with matters of IO and caching.

--
How long did the whining go on when KDE2 went to KDE3? The only universal constant is change. If a species can not adapt it goes extinct. That's the law of the universe, adapt or die.
-- Billie Walsh, May 18 2013

--
To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org
To contact the owner, e-mail: opensuse+owner@opensuse.org