Looking for consensus, I can marginally agree with your third reason, but not with the first two.

1) I know that statically compiled programs are bigger, but they do not fill a hard disk. To get more data on this, I tried to compile a few programs with -static (usually I go with the classical configure/make scheme). I did it with three programs I currently use: mc (Midnight Commander), mutt and lyx. While all three compile here without errors dynamically, with -static I got errors in all three. Only mc ignored them and produced an executable; the other two failed. In all cases the errors were related to functions in libintl. For mc the comparison is as follows: dynamic size 675192, static size 919995. That is no big deal, is it? No risk of exploding my hard disk. (A minimal sketch of the kind of program involved follows below.)

2) As for the second reason, that is indeed the problem which arises most of the time, and I actually see it as an argument against dynamic linking. Look at what happens in the tests above: two out of three applications do not compile statically. The error criteria are obviously more relaxed for dynamic linking; guess why?

So maybe there are "other reasons" (no doubt about that, since dynamic linking has become the rule, there must be reasons), but the ones you give are not sufficient for me, sorry. However, your explanation clearly outlines the problem and clearly shows how difficult it is to live in an open-source community with the present approach. The approach you advocate is the Microsoft approach. With due consideration to the fact that companies like SuSE and Redhat earn their living from it, I will not say it is bad; I simply don't like it...
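A minimal sketch of the kind of program involved (purely hypothetical, not one of the three applications named above): it calls gettext() and related functions from libintl, the sort of dependency that can make a -static link fail when only the shared library is installed. The file name and build commands in the comments are illustrative assumptions.

/* hello_intl.c -- hypothetical example, not one of the programs named
 * above. It uses gettext and friends from libintl, the kind of
 * dependency that made the static builds above fail.
 *
 * Dynamic build:  gcc -o hello_intl hello_intl.c
 * Static build:   gcc -static -o hello_intl hello_intl.c
 *
 * (On some systems an explicit -lintl is needed; the static link only
 * succeeds if a static libintl.a is actually installed, otherwise the
 * linker reports undefined references much like the errors described
 * above.)
 */
#include <libintl.h>
#include <locale.h>
#include <stdio.h>

#define _(str) gettext(str)  /* usual shorthand for translatable strings */

int main(void)
{
    setlocale(LC_ALL, "");                        /* use the user's locale */
    bindtextdomain("hello_intl", "/usr/share/locale");
    textdomain("hello_intl");

    printf("%s\n", _("Hello, world"));   /* translated if a catalog exists */
    return 0;
}

Comparing the two resulting binaries with ls -l gives the kind of size difference quoted above for mc.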
On Tue, Jun 27, 2000 at 06:42:11AM +0200, Philipp Thomas wrote:

* Andrei Mircea (mircea.andrei@wanadoo.fr) [20000627 01:59]:
Given the huge amounts of RAM available in present computers, I wonder if it would not be more reasonable to go back to the old system of statically linked programs. Dynamic linking sure is a bright idea, but... what do other people on this list think?
That's a very bad idea, for a number of reasons:

- The needed disk space would explode. To see the difference, just compile your favourite app with -static added to the compiler options.
- Updating a shared library would force recompiling/updating *all* applications that use it.
- Not every machine has huge amounts of RAM.

There are other reasons, but these should suffice.
The reason for the problems with shared libraries is that most developers aren't cautious enough. What you need for this kind of work is a defined environment for building binary packages. That's why our automatic package-building system uses a chroot environment in which all the packages needed for a given distribution are installed. That way you get a defined set of tools and libraries that match the distribution for which a package is built.
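As a hypothetical illustration of the idea (this is not SuSE's actual build tool; the build-root path and the make invocation are assumptions), a wrapper can enter the prepared build root with chroot(2) before running the build, so the compiler, headers and libraries it sees are exactly those installed for the target distribution:

/* buildroot_run.c -- hypothetical sketch, not SuSE's actual tool.
 * Enters a prepared build root with chroot(2) and runs the build there,
 * so only the tools and libraries installed in that root are visible.
 * Must be run as root; the build root is assumed to already contain a
 * complete set of packages for the target distribution, including /bin/sh.
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const char *root = (argc > 1) ? argv[1] : "/var/tmp/build-root";

    if (chroot(root) != 0) {               /* confine the build to the root */
        perror("chroot");
        return EXIT_FAILURE;
    }
    if (chdir("/usr/src/packages") != 0) { /* a path inside the build root  */
        perror("chdir");
        return EXIT_FAILURE;
    }
    /* run the actual build with whatever is installed in the build root */
    execl("/bin/sh", "sh", "-c", "make", (char *)NULL);
    perror("execl");                       /* reached only if exec failed   */
    return EXIT_FAILURE;
}

The wrapper has to run as root, and the build root must already be populated with the distribution's packages, which is what the automatic system described above takes care of.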
Philipp
--
Philipp Thomas <pthomas@suse.de>
Development, SuSE GmbH, Schanzaecker Str. 10, D-90443 Nuremberg, Germany