Perhaps something like this is what you're thinking of?

http://www.infrastructures.org/papers/bootstrap/bootstrap.html

I saw this the other week on RootPrompt and thought it was pretty neat. I'd like to get there one day, which will be one day _after_ I figure out how to get everything running the way I want it in the first place. By then, I may be too scared to tear up my hard work and start from scratch to do a complete overhaul to a system like that. But hopefully by then, LinuxNOW will be functional, w/ a SuSE version ;)

Monte

Gerhard Sittig wrote:
On Fri, Aug 11, 2000 at 14:17 -0600, Kurt Seifried wrote:
And don't believe in "automated security". I feel quite strongly that automatic updates won't work without heavy human supervision. :) Having your system (potentially) damaged by a simple-minded program sucking in every update unchecked, just because "the file was there and I felt like applying it", is not fun. When something breaks, *I* want to be the reason why. :>
Security has to be automated as much as possible. What happens when companies roll out 5000 Linux desktops?
I guess I put it into the wrong words ...
I meant that I don't believe in automatic upgrading from an external source. I want to be the one making the decision about how and, most of all, _when_ to break a system (or to risk breaking it) by updating.
Admittedly I've never been in the above position of handling a few thousand installations, nor did I get anywhere close to it. :) But I could imagine grouping these machines and applying the updates in steps to see how they react. In a perfect world the updates simply deliver their remedies and plainly work. In the real world these updates have side effects and you don't want to break *all* machines at _once._ There have been, and always will be, updates which just don't work (a quick response in an attempt to help before the "real" fix is found and tested) and updates which change some behaviour that other software has to obey, too.
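To make that concrete, here is a minimal sketch of such a staged rollout. The host names, the update command, and the ssh-based access are only assumptions of mine, not an existing tool:

    #!/usr/bin/env python
    # staged_rollout.py - hypothetical sketch: apply updates group by group
    # so that a bad update never hits *all* machines at once.

    import subprocess

    GROUPS = [
        ["test-box1", "test-box2"],               # isolated test machines first
        ["dept-a-01", "dept-a-02"],               # then one small group
        ["dept-b-01", "dept-b-02", "dept-b-03"],  # then the rest, step by step
    ]

    # Assumed update step: freshen already-installed packages from a local mirror.
    UPDATE_CMD = "rpm -Fvh /local/mirror/updates/*.rpm"

    def update_host(host):
        """Run the update command on one host via ssh; True means success."""
        return subprocess.call(["ssh", host, UPDATE_CMD]) == 0

    for group in GROUPS:
        failed = [host for host in group if not update_host(host)]
        if failed:
            print("stopping rollout, these hosts reported errors:", failed)
            break
        print("group updated and (so far) healthy:", group)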
To put it short: at the very least I would like to have a filter deciding _which_ updates (of all those available) get applied and _when_ I want to risk updating my machines. This involves testing beforehand on isolated machines to recognize when a fix for one thing breaks other things or doesn't fix anything at all. I don't believe in a distributor auto-delivering updates to me; I still see this as a service and an offer, and I'm still the one to accept, delay, or reject.
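A rough sketch of that filter, with made-up paths rather than any distributor's real layout, could be as small as this:

    #!/usr/bin/env python
    # pick_updates.py - hypothetical sketch of the "filter": of everything the
    # distributor published, only updates the admin has explicitly approved
    # (after testing on an isolated machine) are handed on for installation.

    import os

    AVAILABLE_DIR = "/local/mirror/incoming"       # all published update packages
    APPROVED_LIST = "/etc/admin/approved-updates"  # one package file name per line

    with open(APPROVED_LIST) as f:
        approved = set(line.strip() for line in f if line.strip())

    available = set(os.listdir(AVAILABLE_DIR))

    print("apply now:", sorted(available & approved))
    print("held back until tested:", sorted(available - approved))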
Of course(?) once I want to apply an update, I'm still free to do so automatically inside my reach, from a source *I* define and for a set of machines I declare (and these could be all machines, as well). We agree that physically stepping up in front of more than ten machines is something no admin would like to do, and even doing so via the network raises the feeling that "there should be a different way, without me sitting here and waiting for the computer(s) to finish". :) And I'm aware that delayed application of available fixes to known problems leaves a window of possible vulnerability. But I don't want to be forced to accept a fix's downsides just because the fix is published.
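And the "automatic inside my reach" part might look like this sketch, pushing approved packages from a staging directory I control to a host list I maintain (the paths, the host file, and the rsync/rpm commands are again only assumptions):

    #!/usr/bin/env python
    # push_updates.py - hypothetical sketch: once an update is accepted, push it
    # automatically from a source *I* control (a local staging directory) to the
    # set of machines *I* declare, which may well be all of them.

    import subprocess

    STAGING = "/local/staging/approved/"   # admin-controlled source of packages
    HOSTFILE = "/etc/admin/update-hosts"   # the declared set of machines

    with open(HOSTFILE) as f:
        hosts = [line.strip() for line in f
                 if line.strip() and not line.startswith("#")]

    for host in hosts:
        # copy the approved packages over, then freshen what is installed
        subprocess.call(["rsync", "-a", STAGING, host + ":/var/tmp/updates/"])
        subprocess.call(["ssh", host, "rpm -Fvh /var/tmp/updates/*.rpm"])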
Maybe we're talking about different granularity and release schemes here? Maybe you put more trust in the fix's publisher than I do? But surely I speak as someone who never had to handle more than some twenty machines at once.
virtually yours
82D1 9B9C 01DC 4FB4 D7B4 61BE 3F49 4F77 72DE DA76
Gerhard Sittig
true | mail -s "get gpg key" Gerhard.Sittig@gmx.net
--
If you don't understand or are scared by any of the above
ask your parents or an adult to help you.
--
The Law of Unintended Consequences:
Whether or not what you do has the effect you want, it will have
three at least you never expected, and one of those usually unpleasant.
    Siuan Sanche, Aes Sedai
    From 'The Path of Daggers', Book Eight of the Wheel of Time Series
    by Robert Jordan