Carlos E. R. wrote:
That "rule of thumb" was invented for windows, it makes no sense in Linux. There is no "rule", simply use as much or as little as you need, regardless on how much RAM there is available. It depends on the programs that will be run and their memory requirement.
Ah. I was wondering about that, since I've read both truisms. For my own part, I've never been able to understand the rationale for the 2X truism, so I've done my own thing: 768MB RAM and a 2GB swap partition. My rationale is that I like to have several large programs available without having to wait for them to load and initialize every time I want to switch from one to another. 2GB of swap lets me have them all at once, with occasional page faults rather than out-of-memory errors. Of course, switching to a program I haven't used in a while generates lots of page faults, but the initialization is already there in swap, so startup time is almost unnoticeable. As I recall, this would have been a mistake until several years ago, when the Linux memory manager was fixed so that large swap spaces no longer slowed the entire system down drastically.
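For anyone curious how much of that swap actually gets used, here is a minimal Python sketch; it is only an illustration, assuming a Linux /proc/meminfo that reports MemTotal, SwapTotal and SwapFree in kB, which is the usual layout:

#!/usr/bin/env python3
# Minimal sketch: report RAM size and swap usage by parsing /proc/meminfo.
# Assumes a Linux system where the values are given in kB (the usual format).

def read_meminfo(path="/proc/meminfo"):
    """Return a dict mapping each field name to its value in kB."""
    info = {}
    with open(path) as f:
        for line in f:
            key, _, rest = line.partition(":")
            info[key.strip()] = int(rest.split()[0])
    return info

if __name__ == "__main__":
    m = read_meminfo()
    ram_mb = m["MemTotal"] / 1024
    swap_mb = m["SwapTotal"] / 1024
    swap_used_mb = (m["SwapTotal"] - m["SwapFree"]) / 1024
    print("RAM total : %8.1f MB" % ram_mb)
    print("Swap total: %8.1f MB" % swap_mb)
    print("Swap used : %8.1f MB" % swap_used_mb)
    print("Swap/RAM  : %8.2fx" % (swap_mb / ram_mb))

(free or swapon -s will of course report the same numbers; this just shows where they come from.)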
And no, I do not accept calling HD space "RAM", for one thing: it is not directly addressable by the processor, so it is not even "memory". Reading from the HD requires a program, since the data resides on a peripheral device. I call it "long-term external storage space" ;-)
I agree. Furthermore, it's not even "random" in the sense that internal memory is: you can't get to a location quickly. You have to move the head, then wait for the sector to come under the head and be recognized, then read in at least a sector, then copy the data to a convenient portion of internal memory, then ...

John Perry
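P.S. To put some rough numbers on that, here is a quick back-of-the-envelope comparison. All of the figures below are assumed, illustrative values for a garden-variety 7200 RPM drive, not measurements:

#!/usr/bin/env python3
# Back-of-the-envelope comparison of disk vs. RAM access time.
# Every figure here is an assumed, illustrative value, not a measurement.

avg_seek_ms     = 9.0                    # assumed average seek time
rpm             = 7200
avg_rotation_ms = (60000.0 / rpm) / 2    # half a revolution, on average
transfer_mb_s   = 50.0                   # assumed sustained transfer rate
page_kb         = 4.0                    # one memory page
transfer_ms     = page_kb / 1024 / transfer_mb_s * 1000

disk_ms = avg_seek_ms + avg_rotation_ms + transfer_ms
ram_ms  = 100e-9 * 1000                  # ~100 ns DRAM access, assumed

print("Disk page-in: %8.2f ms" % disk_ms)
print("RAM access  : %10.6f ms" % ram_ms)
print("Ratio       : roughly %d times slower" % round(disk_ms / ram_ms))

On those assumptions a single page-in costs on the order of 13 ms against a fraction of a microsecond for RAM, which is why an occasional page fault is tolerable but a system that is constantly swapping is not.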