On Sat, Jan 10, 2009 at 7:32 AM, Rob OpenSuSE wrote:
> 2009/1/10 Larry Stotler:
>> On Sat, Jan 10, 2009 at 12:12 AM, Matt Sealey wrote:
>>> of work but it's definitely possible to knock KDE4 down so it's
>>> basically featureless (exactly like you want it) with barely
>>> anything done on startup except get into X and start networking.
>>
>> Which shows me it's just not a compelling alternative. The devs seem
>> to have fallen into the "it's there, let's waste it" trap. They have
>
> Actually, Qt4 and KDE4 have not particularly increased memory
> consumption, according to my observations.
>
> There has been stuff added, but that is a matter of configuration.
> 11.1 just has a lot of issues right now, but testing pre-release on
> even 8-year-old hardware, I had acceptable performance for occasional
> desktop use.
>
> A lot of people with much newer machines have complained of
> performance problems, so I think such a sweeping generalisation as
> the "it's there, let's waste it" trap comment is as erroneous as the
> 1GiB RAM requirement.
It runs great in 256MB and 512MB - as well as you could expect any OS
to run in them, which is to say, fairly usable. I remember back in the
days when Windows 2000 was a limited beta and it would install on a
system with 24MB of memory. It ran like crap.
Systems with 32MB of memory - now, you could run Office on those! The
limiting factor was NOT the slow processors (30-60MHz Pentiums) or the
lack of RAM, but that when it DID swap, the ancient IDE controllers
would effectively lock the machine up. If you put a pretty decent
Promise IDE card in there (we're talking ATA66, I still have that
card) the whole thing would just pop to life.
In later betas and the final release they bumped the requirement to
64MB to establish a baseline performance expectation. That is not to
say it would no longer run in 24MB (it would flake out during install
because of a requirements check, but you could swap in 64MB to
install and then go back to a lower size.. easily possible if you're
using Ghost or PartitionMagic to push sysprepped images); rather,
there are many, many other things that go with a system that only has
24MB which limit performance far more.
We can bring this into the future somewhat, and point out that the
only reason the Efika (400MHz G2 PowerPC, no L2 cache, 128MB DDR2)
runs it so badly is because it has no DMA-enabled ATA driver. The
processor is fast, the memory is fast, but lack of fast swap space
really lets it down.
I cannot imagine you would ever have a PC with even 256MB that could
not get by with KDE4 - I actually got KDE4 running on my Via EPIA
M1000 last week, and at 1GHz with 256MB RAM it's fine, fine, fine.
(My only disappointing moment was finding that the unichrome driver
sucks, as does the unichrome DRI, which will not load, so I could not
spin a desktop cube around. Otherwise it was snappy.)
So, KDE4 has reduced memory requirements and Qt4 is far better
optimized and more efficient - so where is the problem? Well, I'd say
it's avahi-daemon, postfix, powerd, beagle, pick any daemon which
starts up at boot, or before the desktop; their number has doubled
from 10.3 to 11.1. I can't nitpick at the accessibility daemons, but
the amount of stuff loaded on boot has gotten way out of control.
Let's consider something like the FUSE filesystem. It does not take
too long to load, or take up too much memory, but it is a good
example of the pattern. boot.fuse brings it up at boot time, way
before the desktop and way, way after everything has been pulled from
fstab. This may be necessary to, for instance, mount the user's
Windows drive for the desktop. But why not arrange it so that this
drive's filesystem is only detected, its drivers loaded, and the
filesystem mounted as and when the user ACTUALLY goes to access it?
When you click a USB stick, it mounts, VFS layers kick in, and FUSE
modules are loaded. Do you really need the kernel driver sitting
around for 5 days before a user does this? Can't the module be loaded
on demand and kept around until memory pressure or something else
evicts it? The same would be true of anything else.
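On-demand mounting along these lines already exists in the shape of
autofs: nothing is mounted until a process first touches the path, and
the mount is dropped again after an idle timeout. A minimal sketch
(the mount point, map file and device name are assumptions for
illustration):

```
# /etc/auto.master -- hand /mnt/ondemand to the automounter,
# unmounting after 60 seconds of inactivity
/mnt/ondemand  /etc/auto.ondemand  --timeout=60

# /etc/auto.ondemand -- the Windows partition is only mounted
# (and its driver pulled in) on first access to /mnt/ondemand/windows
windows  -fstype=ntfs-3g,ro  :/dev/sda1
```

Nothing for this partition runs at boot; the cost is paid at the
moment of first access instead.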
Postfix is another example of something which is just too big for its
boots. How many people actually configure their system so that SMTP
mail goes directly through this daemon? It's only there because cron
needs an SMTP daemon; a user with a netbook would never care. Debian
and others use ssmtp, which is much smaller, fulfils the requirements
exactly, and does not drag in a full SMTP/LMTP-compliant mail
solution with filters, scripting, and a full mail queue that gets
started on boot and only hangs around waiting for a cron job to fail.
Postfix takes ages to start and soaks up resources. If someone really
does need it, they can grab a postfix pattern - I mean, why not? Or
what if they like exim better? Once it's installed, postfix is hard
for a novice to uninstall :)
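For comparison, ssmtp's entire configuration is a handful of lines: it
keeps no queue and runs no daemon, it just relays each message to a
smarthost the moment it is handed one. A sketch (the relay host and
hostname are obviously assumptions):

```
# /etc/ssmtp/ssmtp.conf -- no daemon, no queue; each message is
# handed straight to the configured mailhub

# who gets mail for userids < 1000
root=postmaster

# the smarthost that does the real SMTP work
mailhub=mail.example.com

# the name to present as our hostname
hostname=laptop.example.com

# let mail clients set their own From: line
FromLineOverride=YES
```

Cron's "sendmail" invocation works unchanged, because ssmtp provides
a sendmail-compatible front end.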
I am sure there are plenty of other things which could be deferred as
services (is this even possible with init or any other boot process?)
until really needed, or simply cut out or replaced with
lower-resource alternatives.
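It is possible with the classic init toolchain, at least for network
services: xinetd holds the listening socket itself and only spawns
the real daemon when a connection actually arrives. A sketch for a
hypothetical daemon (the service name, port and path are assumptions):

```
# /etc/xinetd.d/exampled -- xinetd owns the socket; the daemon
# itself is only exec'd when a client actually connects
service exampled
{
        type            = UNLISTED
        port            = 9000
        socket_type     = stream
        protocol        = tcp
        wait            = no
        user            = nobody
        server          = /usr/sbin/exampled
        disable         = no
}
```

Between connections the daemon is not running at all, so it costs
nothing at boot and nothing while idle.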
The obvious trick is to use memory when you need it and load things
when they're about to be used. Even Windows manages to install a
billion services on a system, but Windows Installer only starts when
you're installing something, Windows Live Communications Platform
only starts when you launch Windows Live apps, and IMAPI CD burning
doesn't start unless I'm about to burn a CD. VirtualBox doesn't start
its service until you're loading VirtualBox. Contrast that with
VMware, which manages to soak up 200MB of auth, NAT, disk-mounting
and DHCP relay daemons before you even start a VM.. and this is just
from VMware Player (which I have not run since I installed it).
--
Matt Sealey