On Fri, Jan 9, 2009 at 8:37 PM, Rob OpenSuSE wrote:
> 2009/1/9 Matt Sealey:
>> On Fri, Jan 9, 2009 at 4:30 AM, Rob OpenSuSE wrote:
>> [...]
>>
>> I'm not sure I agree with you here. Memory is cheap and fast, but CPU
>> cycles have gotten shorter. 11 cycles on a QDR 800MHz bus go by much
>> faster than 2 cycles on a 33MHz bus, if it was even that. Even 140
>> cycles to main memory is faster. And once you get past the latency,
>> the data is burst in and cached for longer.
>
> On old systems, bloat was causing a few megabytes of extra memory
> accesses; now it can be 100-400MB. And it's not 11 cycles: CPUs wait
> hundreds of cycles on cache misses, never mind if there's a page
> fault and disk access involved.
>
> In relative terms, memory has become slower, so even on systems which
> never have memory pressure, you don't want your desktop programs to
> all presume they can use major chunks of physical memory, as if they
> had the system all to themselves.
You're talking a lot of crap, frankly.
You can't possibly think that memory access latency has increased
compared to the processors you used to use. "In relative terms" --
what is that supposed to mean? Try measuring memory access latency in
absolute time rather than clock cycles. A 1MHz 6502 taking 1 or 2
cycles to access main memory (1-2 microseconds per access) is not
faster than a 1.8GHz Core 2 Duo taking 14 or 15 cycles to hit L2
cache (~8ns), and still not faster than one taking ~150 cycles to
reach main memory (~83ns) in the event of a cache miss. And the
overhead of a cache miss, beyond the raw access latency, is NOT
something you can criticize: a cache gives far, far more benefit than
having no cache at all.
Yes, I agree with you about bloat, but while memory access hasn't
scaled with processor speed, it is certainly not "relatively slower"
than it was 10 years ago.
--
Matt Sealey