On Mon, Jan 12, 2009 at 12:39 PM, Rob OpenSuSE wrote:
> 2009/1/12 Matt Sealey:
> You actually appeared to agree, with "relatively slower" a few lines down later, in your statement that included the word scaling, which usually applies to capacity and size comparisons, not relative performance. Misrepresenting and misquoting is not helpful to a constructive discussion, nor to the sentiment you've shown.
I said that taking 1 cycle on a 30MHz system does not mean that 300 cycles on a 3GHz system is slower, just because it takes more cycles. Every memory access on the older system will take that amount of time, whereas accesses to L2 on a lot of systems may take 11-14 cycles (using a G4 as a reference, it went up from 11 to 12, then to 14 with ECC enabled on a 7448 vs. 7447A). Access to data already in L1 is practically free (i.e. less time than the instruction itself, which is usually 1 cycle for 95% of instructions on PowerPC).

Looking at a CPU holistically, you take into account the sum of its parts, the way they interact, and how this affects your code and performance. You are fixated on how many cycles it takes. You're also using it as an argument about using MORE memory. Both of these are wrong.
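To put those cycle counts in wall-clock terms, here is a minimal arithmetic sketch. The 300-cycle miss, 12-cycle L2 hit and 1-cycle L1 hit at 3GHz are the figures discussed above; the 3-cycle DRAM access for the hypothetical 30MHz part is an assumption for illustration, not a measured number.

```python
def cycles_to_ns(cycles, clock_hz):
    """Convert a cycle count at a given clock into wall-clock nanoseconds."""
    return cycles / clock_hz * 1e9

# Old 30MHz system: every access goes to DRAM (3 cycles assumed).
old_access_ns = cycles_to_ns(3, 30e6)    # 100 ns

# Modern 3GHz system: the same wall-clock time for a full miss,
# but most accesses never get that far.
new_miss_ns = cycles_to_ns(300, 3e9)     # 100 ns - a full miss to DRAM
new_l2_ns   = cycles_to_ns(12, 3e9)      # 4 ns   - L2 hit
new_l1_ns   = cycles_to_ns(1, 3e9)       # ~0.33 ns - L1 hit

print(old_access_ns, new_miss_ns, new_l2_ns, new_l1_ns)
```

So 300 cycles at 3GHz is the same 100ns of real time as 3 cycles at 30MHz; the difference is that the fast machine only pays it on a miss.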
> Again, IMO the same functionality is not generally consuming more memory in newer software; in fact it is clear that many projects are avoiding that for good reasons. Despite a large increase in the number of 2-4GiB desktop boxes, they've taken steps to reduce memory footprint.
> And how does some app using more memory for a task somehow reduce its performance because memory latencies and access times have increased over the years?
Using 10MB or 100MB of memory for something makes no difference if your L2 cache is 256KB - you can never fit all of it in, so it will go to main memory at some point. On a system where you have no L2 cache, you will nearly always be looking at main memory. In these situations
with modern processors, the time taken to perform a miss, fetch the
new data, is still much less in real measurable time, than it was on
older processors with more limited architectures. Every time you swap
from application to application, large swathes of data will need to be
loaded in - and caches effectively flushed so as to make room for the
currently running task and not the previous one. Embedded processors allow locking code into L1 or L2 for exactly this purpose - for instance, code you need to be there all the time. Linux doesn't bother.
We're not really concerned here with how much time a CPU wastes accessing main memory. It's pretty clear that no component running on a desktop Linux is going to fit entirely in cache, and in the end, this makes it a pretty moot point.
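The point about misses being cheap on average can be sketched with the standard expected-access-time calculation. The latencies echo the figures above; the hit rates (95% L1, 4% L2, 1% DRAM) are assumptions for illustration, not measurements from any real workload.

```python
def average_access_ns(levels):
    """Expected memory access time for a cache hierarchy.

    `levels` is a list of (fraction, latency_ns) pairs ordered from L1
    outward, where `fraction` is the share of ALL accesses satisfied at
    that level; the fractions must sum to 1.
    """
    assert abs(sum(frac for frac, _ in levels) - 1.0) < 1e-9
    return sum(frac * latency for frac, latency in levels)

# Modern CPU with assumed hit rates: 95% L1, 4% L2, 1% all the way to DRAM.
modern = average_access_ns([(0.95, 0.33), (0.04, 4.0), (0.01, 100.0)])

# Cacheless older system: every access pays the full DRAM latency.
old = average_access_ns([(1.0, 100.0)])

print(modern, old)   # ~1.47 ns vs 100 ns
```

Even with every miss costing the same 100ns of real time, the hierarchy makes the average access roughly two orders of magnitude cheaper - which is why counting the cycles of the worst case alone is misleading.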
What is to be concerned about is how a desktop system that had minimum requirements of 256MB a couple of years back now has requirements - even with the purported improvements in memory usage for KDE4, for example - which are upwards of 512MB. This is obviously not the fault of the desktop itself, but perhaps of the new technologies that came with it.
Search tools like Beagle have some enormous resource usage, which may or may not be down to Mono. I know the ATI Catalyst Control Center doesn't do much more than the old ATI Control Panel did on Windows, but it still takes up 60MB on boot here. That might be a quarter of the memory in a system, and cause premature use of the paging file - which shouldn't be too bad on most systems, but on others (see my previous mail about slow hard disks; contrast the speeds of USB-connected disks or single-level NAND flash) it can cause real problems. That is a lot of memory for a couple of sliders.
It has nothing to do with memory access times, but with the overall use of memory. A Linux desktop should boot in 256MB (as the installer won't install in less) and have a large amount left for applications - currently it soaks up just about 210MB on my Pegasos and Via EPIA after login to the GNOME desktop, with no Compiz etc. enabled. I think this is somewhat unacceptable. It's workable, but it could be less.
If you have 4GB or 8GB of memory it is nothing to care about, but users put in this memory to run applications, not to provide space for 50 boot services which sit idle, most of which are only there so that people with large amounts of memory can get things done quicker. This is not very friendly to those who run in more constrained environments. I'm not looking for GNOME etc. to run in 128MB (although in 10.3 it did, and I had enough memory left to run applications before swapping), but reducing the memory footprint of the basic install would be awesome.
--
Matt Sealey