Anton Aylward wrote:
> I can understand a generic build for a distribution, yes. But ... let's face it:
> ... Better than 99% of Linux users aren't "doing" the specialized kernel builds that Linda does.
> ... Better than 95% of Linux users aren't as technically sophisticated as the regular contributors here, nor do they want to be. They just want their applications to work. Think "Android".
> When I get a new kernel from Kernel_Stable I still have ...
> ... To run a mkinitrd or equivalent. And OUCH, that hits all four cores and for a few seconds I'm not getting much work done!
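As an aside: on recent releases mkinitrd is a wrapper around dracut, and a dracut drop-in config can cut both the size of the image and the time spent rebuilding it. A sketch only; the file name below is illustrative, the options are documented in dracut.conf(5):

```
# /etc/dracut.conf.d/50-local.conf  (hypothetical file name)
# Only include drivers for hardware actually present in this machine,
# instead of the generic everything-included module set:
hostonly="yes"
# Trade a little image size for a much faster compression pass:
compress="gzip"
```

With hostonly enabled, the initrd rebuild has far fewer modules to copy and compress, which is exactly the "all four cores for a few seconds" step.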
> Well! At this point we get back to a situation like Linda describes. I've downloaded ALL the modules, even those for hardware I can't imagine using (or affording!). This is not the debugging kernel, so like most of the application binaries, it is 'stripped'. (See 'man 1 strip'.) So why not 'strip' out the modules that aren't needed/wanted? Or make them a load on demand?
Because the typical kernel is still only 45-50MB to download, a bit more to install. I do appreciate that there is probably still someone in the Australian outback or in the Belgian Congo on 2400 baud dial-up, but those 50MB are a mere drop in the ocean.
> We are already doing load-on-demand for kernel modules. If you run an ATI or an Nvidia GPU then you have to load the driver. Is it too much to ask to extend the /etc/modprobe.d/ tree to 'strip' or never load "blacklisted" modules?
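The tree in question is /etc/modprobe.d/ on current systems, and it already does half of what Anton asks for. A minimal sketch of keeping an unwanted module out, using nouveau purely as a stand-in example (file name is illustrative; syntax per modprobe.d(5)):

```
# /etc/modprobe.d/50-blacklist-local.conf  (hypothetical file name)
# 'blacklist' stops the module being loaded by alias during hardware
# detection, but an explicit 'modprobe nouveau' would still succeed ...
blacklist nouveau
# ... so add an install override to make even explicit loads a no-op:
install nouveau /bin/true
```

What this does not do is reclaim the disk space; the .ko files stay installed. "Stripping" them would mean a packaging change, not a modprobe one.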
No, it's not too much, but what would you gain? What's the business case, and how does the installer pick the right set of modules?

--
Per Jessen, Zürich (18.1°C)
http://www.hostsuisse.com/ - dedicated server rental in Switzerland.