Sam Clemens wrote:
Linda Walsh wrote:
Sam Clemens wrote:
In your original post, you, perhaps inadvertently, implied that the machine couldn't be taken offline to do a fresh install.
Well --... it "can't" because it would cause me too much grief. I *would* like to get to having a redundant, failover system, but it hasn't been a priority.
Wholesale replacement of basic system libraries will end in utter chaos, as the install system needs the new libraries, and the older software needs the old libraries.
---- This is not a black & white issue. It's not "work this way or not work at all".
Linda, do yourself a favor, and STOP BEATING YOUR HEAD AGAINST THIS BRICK WALL because it's *not* going to crumble.
What part of "already done it; it works" do you not understand? I don't know how many times I've said it -- and others in this thread have done it going between different dist-vendors. It's been done, I've done it, and I'll do it again. Get beyond "can't be done" -- it's already happened. The SuSE 9.3 server I've been talking about is already mostly converted. This is from the current running system -- it's not a simulation -- try it on your system and see what comes up:
rpm -qa --queryformat="%{distribution}\n" | sort | uniq -c
     11 (none)
      1 SuSE Linux 9.1 (i586)
     88 SuSE Linux 9.3 (i586)
      1 SuSE Linux 9.3 (i686)
      9 openSUSE 10.2 (i586)
    670 openSUSE 10.3 (i586)
      2 openSUSE 10.3 (i686)
(uptime 19 days at this point)... How can you claim this can't be done, or that it doesn't work? This isn't hypothetical, and it's not the first time.
You cannot replace selected parts of an engine while the entire engine is running. This goes for tightly bound computer systems (in this case, ANYTHING which uses glibc) as much as for physical objects.
Sorry for bothering you. I really did say my message was addressed to others who did things the way I did; I didn't ask for a debate on whether or not it could be done. Whether it can be done is "irrelevant". You live in your world where it can't be done, and I live in my fantasy world where things work. Can't you just allow me and people like me the fantasy that our systems are actually functional, or that the words I am typing are being transmitted to you through my fantasy system? It's OK that it doesn't work, because in my fantasy, it works for me.

FWIW, on linux, read-only libraries with the 'right' alignment can be paged into memory directly, without going through buffers or a paging file. This is the preferred storage for commonly used libraries.

A file may have multiple names, but ultimately, a file's information on disk is stored in groupings. Ignoring extended attributes, the information for one file is stored under one "node" on disk; for short, these are called i-nodes or inodes. Inodes are the "handle" the operating system uses to reference information on a disk. A file's contents can be referenced by multiple names on the file system: one can create a duplicate name using the "ln" command (without the "-s" flag) to create another "link" from a 2nd name to the same inode referenced by the first name. This type of link, BTW, is called a hard link (vs. links created with ln's "-s" flag, which are termed symbolic or soft links). After you've created a 2nd or 3rd name for an inode, you can delete the first name and the information still remains on disk, accessible through the 2nd name. As long as there is at least one reference to the file's information node, the OS keeps the space reserved. Once you delete all of the file system's names for an inode, the file system can release the space. However, there is another way an inode on disk can remain reserved: by the OS itself.
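To make the hard-link behavior concrete, here's a small shell sketch you can run yourself (the directory and file names are made up for the example):

```shell
#!/bin/sh
# Demonstrate: two names, one inode; data survives deleting the first name.
set -e
dir=$(mktemp -d)
cd "$dir"

echo "hello from the inode" > a.txt
ln a.txt b.txt                  # hard link: a 2nd name for the same inode

stat -c '%i %h %n' a.txt b.txt  # same inode number, link count 2 for each

rm a.txt                        # delete the first name...
cat b.txt                       # ...the data is still there via the 2nd name
stat -c '%h' b.txt              # link count has dropped back to 1
```

Only when the last name (and, as described below, the last in-kernel reference) goes away can the file system actually release the space.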
As long as the OS holds a link to a file on disk, the information (or data) in the file remains untouched. When programs needing libraries are loaded into memory, the memory system keeps track of what has been demand-paged by associating each in-memory buffer with a file-system inode and an offset. This means that under normal circumstances, if you delete a file on disk and then create a new file with the same name, the memory subsystem still reserves the physical space of the old file on disk, and is still able to page information out of that old file into memory for programs that need it.

Additionally, when a program "forks" a child, the child gets a copy of the parent's address space -- this includes blocks in memory that are mapped to a demand-paged library on disk. So those programs will also be able to demand-page in any library call they wish from the original "deleted" file. Only when the last process holding memory mapped to the inode's information on disk exits, or executes a new program, does the inode get released -- which means that long-running daemon programs can hold open a deleted file for as long as they, or a forked copy of them, remain running. Essentially, any program that had the deleted file open before it was "replaced" will still be running with the old library, which effectively exists only in memory (physically it still resides on disk until the last reference is released, but there's no easy way to get at it -- not impossible, if someone were determined, but not going to happen accidentally). So when you replace a library on disk, the operation, by established norm, uses the O_CREAT flag to ask for a new inode number to store the new file in.
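You can see the "deleted but still readable" behavior with nothing more than a shell and a held-open file descriptor -- the same mechanism, in miniature, that keeps a running daemon's library alive (file names here are illustrative):

```shell
#!/bin/sh
# Demonstrate: an unlinked file stays readable through an open descriptor,
# just as a deleted library stays usable by processes that mapped it.
set -e
dir=$(mktemp -d)
cd "$dir"

echo "old library contents" > libold.txt
exec 3< libold.txt              # hold the inode open on fd 3

rm libold.txt                   # unlink the only name; the inode lives on
ls libold.txt 2>/dev/null || echo "name is gone"

cat <&3                         # the data is still readable via the fd
exec 3<&-                       # closing the last reference frees the space
```

A long-running daemon is in exactly the position of fd 3 above: until it exits (or exec's), the kernel keeps the old inode's data reachable for it.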
An update program could be written -- to controvert established update procedure and open an existing inode to write into -- but *normally*, under linux, this would result in a fatal ETXTBSY ("text file busy") error, because you are trying to open, for writing, a file that is mapped in as read-only executable text. The system won't let an updating program "accidentally" open, in write or update mode, a file that is locked read-only because it is being used as paging backing store. So an update program has to create any update or replacement in a new inode -- preventing garbage from being written into a file actively being used as page backing. (Note: this wasn't always true on all unix variants, but it has been on linux for as long as I can remember.) A new file doesn't overwrite or upset running programs; any program dependent on that file (by the same name) will run with the new information if it is started after the file is updated.

If the library file has the same name (implying the same version), the interface *should* be binary compatible; if it's not, that would *likely* be a bug. If you upgrade the version of a library, it will have a new version number, and any existing installed rpm will complain about your removal of the old version number when you try to replace the library. Thus, if you follow rpm's "advice" (which, unfortunately, is more often than not over-conservative in its requirements), you will have to install a "side-by-side" compatibility library that has the same version number as the previous library. Its design intent is to provide binary compatibility for older applications that might want to use the old version of the library. Thus, in one rpm command, glibc can be updated, without upsetting any old programs, as long as the new compatibility library package is installed as well.
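The "replacement goes into a new inode" convention is easy to verify from the shell: write the new contents to a fresh file and rename it over the old name, then compare inode numbers. This is a sketch of the convention, not what rpm literally executes, and the file names are made up:

```shell
#!/bin/sh
# Demonstrate: replacing a file by create-then-rename yields a NEW inode,
# leaving the old inode untouched for anything still mapping it.
set -e
dir=$(mktemp -d)
cd "$dir"

echo "version 1" > libdemo.so
old_inode=$(stat -c %i libdemo.so)

echo "version 2" > libdemo.so.new   # O_CREAT: new data lands in a new inode
mv libdemo.so.new libdemo.so        # atomic rename over the old name

new_inode=$(stat -c %i libdemo.so)
echo "old=$old_inode new=$new_inode"  # the two inode numbers differ
```

Because the rename is atomic, there is never a moment when the name points at a half-written file -- programs started before the swap keep the old inode, programs started after it get the new one.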
I'm sorry if I told you stuff you already knew, but I wanted to make sure you understood my 'fantasy', and why, in my world, I can replace glibc with a new version plus the new version's legacy-compatibility library, and still have all the applications, new and old, play nicely together.
We've explained it to you several different ways. You're just not seeing it.
---- None of you have "explained" it, nor laid out in what way the above, normally working, update process has been broken by SuSE. If you have knowledge of deliberate acts or decisions by SuSE to defeat the normal linux memory and file mechanisms, then yes, it's possible to pervert the normal process and purposefully create failure. But that would be 'evil'... of course we aren't talking about Google here, and SuSE has been 'touched' by Microsoft, so... I suppose anything is possible. FWIW, I usually run my own, mostly vanilla, kernel that I build myself, so I at least have some clue about how the OS should be functioning at the lowest level (obviously the specific code changes frequently, but certain features should remain compatible between versions, or it wouldn't be behaving like Linux).
If you are really convinced that we're all wrong, and insist on doing this crazy thing... well, have fun on the job hunt.
If you are really convinced that linux doesn't work as I have described, then indeed I'm woefully outdated in my technical knowledge, and I would appreciate it if you could tell me why SuSE would deliberately work to override the normal unix/linux mechanisms that protect against garbage being written into a file's paging store. Moreover, if you would explain how they do it, and how it works on a standard "vanilla" linux kernel, I'm sure I'm not the only person who would be very interested to know. Please tell; otherwise, please don't vehemently proclaim that you know something won't work -- especially when multiple people already claim to have done it.

If this list is intended to be only for/by "IT" people, I guess I'm confused and lost. I would have thought people who adamantly insist on IT protocol being the only one-true-right way would be on the SUSE-enterprise list, not the "Open" SuSE list.

I'm sorry for being somewhat abrasive, but, from my perspective, you are telling me that if I try to sail around the world, I'll fall off the edge -- when I believe I have already sailed around the world. In your defense, you may be seeing me as someone who's claiming to be able to flap their arms and fly... but I assure you, I have to flap my wings just like everyone else! :-)

-l
--
To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org
For additional commands, e-mail: opensuse+help@opensuse.org