On 12/6/2011 12:08 PM, Greg KH wrote:
On Tue, Dec 06, 2011 at 05:32:55PM +0100, Marcus Meissner wrote:
On Tue, Dec 06, 2011 at 07:37:10AM -0800, Greg KH wrote: ...
Again, what specifically is wrong with debugfs that is causing problems?
Nothing.
Is it just the fear of the unknown?
The fear of the yet undiscovered problems.
That fear will always be there; it conflicts with the need/want for new features and functionality.
This "fear of the unknown" for a feature of the kernel that has been there for a very long time is quite strange to me.
And again, if problems are found with any kind of security-related information leakage that should not be there in debugfs, let us know and it will get fixed.
But don't outright ban the thing just because you are "afraid" of it; that's wrong.
Please try to think as a security worker for a short moment...
Please remember that my first full-time Linux job was as a security worker, I know this field very well. Because of that job, the whole LSM layer in the kernel was created to try to mitigate these types of problems, along with the product that today is called apparmor. That tool does mitigate the attack surface very well, and we support it for people who are worried about just such things.
"If there are problems, tell us, we fix it" ... this is the way the security world works today (and it works basically).
But this is a huge, ever-turning treadmill on which we (security and developers) can barely keep up.
What we (security, and likely our users) want is a smaller, slower-running treadmill.
Of course, but then again, you need to balance it with the need of those same users for those new features and requirements.
To shut down whole subsystems of the kernel just because you "fear" them and have not audited them all to your satisfaction is one reaction, but one that, again, goes against what has made Linux successful in the first place (moving fast where competitors did not).
This means reducing what we call (and it should be self-explanatory) the "attack surface".
And yes, it is fear.
Fear of the "yet unknown security holes the blackhats know about" or for our users the fear of "unknown if hackers have broken in already because we have not all updates or unknown issues."
Do you not manage servers and fear such break-ins?
I do manage such servers, and I do reduce the attack surface on them. But here you are saying that you want to remove functionality wholesale, by default, from systems that have already shipped, just because you now fear the unknown of what is in them.
Why not take the 2-3 days and audit the files to remove that fear you seem to have? That seems like the saner approach here, rather than going around saying "your 12.1 system is now exposed to the elements, quick, change the defaults because we really have no idea what is going on here!", which is what is happening now, right?
thanks,
greg k-h
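
As a concrete illustration of the audit Greg suggests above, here is a minimal sketch (not from the original thread) that walks a debugfs mount and flags entries an unprivileged user could read or write. The mount point and the "world-readable or world-writable" criterion are assumptions made for the example; a real audit would also have to look at what each file exposes, not just its permission bits.

#!/usr/bin/env python3
# Minimal sketch: flag debugfs entries readable/writable by "other" users.
# Run as root so the walk can actually descend into the whole tree.
import os
import stat

DEBUGFS = "/sys/kernel/debug"  # usual mount point; adjust if yours differs

for root, dirs, files in os.walk(DEBUGFS):
    for name in files:
        path = os.path.join(root, name)
        try:
            mode = os.lstat(path).st_mode
        except OSError:
            continue  # entry vanished or is unreadable; skip it
        if mode & (stat.S_IROTH | stat.S_IWOTH):
            print(oct(mode & 0o777), path)

Anything this prints is only a candidate for closer review, not proof of a leak.
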
Having a lot of stuff exposed and believing that it's all OK is fundamentally less secure than not exposing anything in the first place. Rather than say "look it all over and try to find something that is exploitable", the more robust method says "find the 3 things some app actually needs, expose only those, and even try to see if there's a way to expose those few needed things only to the few apps that need them."

Do you install all possible software on important servers and simply assume that it's all safe and isn't providing the tools for some attack, or even some mere accident, or do you install just the software that a given server actually needs to do its job? Do you enable services you don't actually need on important servers and just assume that, since you didn't give anyone logins to them, no harm is being done by having them sitting there all day, every day? Or do you enable only those services that are actually needed for that particular box to do its job?

What is wrong with his argument? Does it not fall under the same category as these examples? I don't think focusing on the word "fear" and all its implications is a fair treatment of the question. Asking "what's broken?" does not answer the question "is this seemingly large area of risk necessary?". You did point out that there is already even more data exposed elsewhere, which doesn't exactly answer the question of why this data is so necessary, nor prove that it's guaranteed to be harmless. As he has now responded, he does in fact consider /proc to be overdue for review as well, for the same reasons. Just because /proc has been there a long time does not make it less of a potential problem. It just makes it harder to deal with. Maybe /proc and debugfs really are in some way fundamentally safe without anyone having to worry about them. But I see nothing intrinsically wrong with the question.

Suppose today you go through everything in debugfs and somehow prove there is nothing abusable anywhere in there. Suppose tomorrow you perform a kernel update and end up with a new or modified driver that places something new in debugfs that wasn't there yesterday and is a complete gift to hackers. Having no debugfs, because you determined that you didn't actually need it, would have saved you from getting bitten by the problem without your having to have predicted it. This seems like pretty basic math. The concept applies in every field of engineering, not just security.

--
bkw
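
To make the "new entry after a kernel update" scenario above concrete, here is a minimal sketch (again, not from the original thread) that records which paths debugfs currently exposes and reports anything that has appeared since the last run. The mount point and the snapshot location are assumptions chosen for the example.

#!/usr/bin/env python3
# Minimal sketch: snapshot the set of debugfs entries and report new ones.
# Run as root, e.g. before and after a kernel update.
import json
import os

DEBUGFS = "/sys/kernel/debug"               # usual mount point
SNAPSHOT = "/var/tmp/debugfs-entries.json"  # arbitrary location for this example

def current_entries():
    entries = set()
    for root, dirs, files in os.walk(DEBUGFS):
        for name in files:
            entries.add(os.path.join(root, name))
    return entries

now = current_entries()
if os.path.exists(SNAPSHOT):
    with open(SNAPSHOT) as f:
        before = set(json.load(f))
    for path in sorted(now - before):
        print("new debugfs entry:", path)
with open(SNAPSHOT, "w") as f:
    json.dump(sorted(now), f)

Of course, a list of new paths only tells you that the exposed surface grew; whether any of it is "a complete gift to hackers" still takes the kind of audit nobody has time to keep repeating, which is the point of the argument above.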