On Monday, 4 October 2004 12.24, Örn Hansen wrote:
subroutine(const char *data)
{
    char buffer[BUFSIZ];
    ...
}
would, depending on the architecture, compile to assembly code roughly like:

allocate const_char*_data; push return-address; push registers; allocate char_buffer[BUFSIZ];
The above is the stack ... the trick here is that the stack grows downward in memory, while writes into the buffer run upward, toward the saved state. So writing to buffer[BUFSIZ+12] will overwrite the return address (or some saved register). However, this is compiler- and architecture-specific. Some compilers save the registers AFTER they've entered the routine. And your script kiddie has to have detailed knowledge of the subroutine that he's going to install his rootkit in.
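To make the pattern under discussion concrete, here is a minimal sketch (the function names and the small BUFSIZ_DEMO constant are illustrative, not from the original post). The first routine has the classic unchecked-copy bug; the second shows the bounded alternative.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define BUFSIZ_DEMO 16  /* small stand-in for BUFSIZ, for illustration */

/* The classic vulnerable pattern: the length of `data` is never
 * checked, so input longer than the buffer writes past it into
 * whatever the compiler placed above it on the stack (saved
 * registers, the return address, ...). */
void vulnerable(const char *data)
{
    char buffer[BUFSIZ_DEMO];
    strcpy(buffer, data);   /* no bounds check: the bug */
    printf("%s\n", buffer);
}

/* The same idea with an explicit bound: oversized input is
 * truncated instead of smashing the stack. */
void bounded(const char *data, char *out, size_t outlen)
{
    snprintf(out, outlen, "%s", data);  /* always NUL-terminates */
}
```

Calling `bounded` with a 16-byte destination and an oversized string leaves a truncated, NUL-terminated result; calling `vulnerable` with the same string is exactly the overflow described above.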
Script kiddies don't have knowledge, they have scripts, which do all the complicated stuff for them. Yes, the script needs to be tuned to the architecture under attack (if nothing else, the shellcode needs to be in the correct machine language :), but the difficulty level of a "normal" stack exploit is relatively low. It's when you add things like a non-executable stack, or Red Hat's randomized positioning of segments, that it gets trickier, but even those are not impossible to defeat. A program with a bug is a very difficult thing to protect. If this were as simple as you say, computer security wouldn't be the profitable industry it is.
Because, you see ... he can't have the program just return to anywhere; that's impossible. He also has to have detailed knowledge of the processor involved, at the assembly level ... as the return-address information has to include information for the memory controller as well, not merely the CPU; those days are long since over.
Did you read "Smashing the Stack for Fun and Profit" by Aleph One? It's an old article, but it's mostly still valid.
All in all ... the script kiddie explanation, as I told someone earlier ... is total TB ... or Tom Bluhr. I say it's a TB because it's not merely a deliberate lie ... it's intended to obscure the facts. In the case of closed platforms, like Windows ... you have to be in the loop of development to know the exact size of the buffer variable. It's not always BUFSIZ long, you know ... it rarely is. In the case of Open Source, this can be a problem ... since people only have to read the source to know. Yet in open source the code changes rapidly ... so you have to know what version the other side is running as well. Which means you have to be pretty competent to do it ... and it's usually beyond the average script kiddie.
I'm not sure what you're saying here. It sounds like "it's more difficult in Windows, but it's also more difficult in open source." In any case, the real advantage isn't that the bugs are harder to exploit; it's that they're much easier to fix. Show me a bug in, for example, Apache, give me a few hours (days?), and it will be fixed. Show me a bug in IIS and watch me sigh as I wait for a reply from MS support. It's difficult to recompile something if you don't have the source code.
The solution is very simple ...
subroutine(const char *data)
{
    char *buffer;

    buffer = malloc(BUFSIZ);
    ...
    free(buffer);
}
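A self-contained version of that heap variant might look like the following (a sketch; the name `subroutine_heap` and the bounded copy are mine, and the fragment above omits the malloc failure check that real code needs):

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

/* Heap variant of the routine: the buffer lives in malloc'd
 * storage rather than on the stack, and is released before
 * returning.  malloc can fail, so its result must be checked. */
int subroutine_heap(const char *data)
{
    char *buffer = malloc(BUFSIZ);
    if (buffer == NULL)
        return -1;                         /* allocation failure */

    snprintf(buffer, BUFSIZ, "%s", data);  /* bounded copy into the buffer */
    /* ... work with buffer ... */

    free(buffer);
    return 0;
}
```

Note that moving the buffer to the heap, by itself, only changes *where* an unchecked copy overflows; the bounded copy is what actually prevents the overrun.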
In the above case, the memory will be allocated on the heap, in 4096-byte blocks that are limited to the current process. No program code is within that block, or near anything accessible through the buffer variable. The memory controller will raise a segmentation violation if the buffer is overrun into the next block, which may contain program code.
It's a little more complicated than that. Google around for "heap overflow".
I really don't think that there are many mission-critical programs out there today that are vulnerable to buffer attacks.
I fear that may be wishful thinking.