Re: local user can delete arbitrary files on SuSE-Linux (fwd)
Excuse me, please! It seems that my little patch was not such a good idea. So, thanks to Pavel Kankovsky and to James Antill!
Peter
--
Peter Münster http://gmv.spm.univ-rennes1.fr/~peter/

---------- Forwarded message ----------
Date: Sun, 23 Apr 2000 00:03:04 +0200
From: Pavel Kankovsky <peak@ARGO.TROJA.MFF.CUNI.CZ>
To: BUGTRAQ@SECURITYFOCUS.COM
Subject: Re: local user can delete arbitrary files on SuSE-Linux

On Fri, 21 Apr 2000, Peter Münster wrote:
If MAX_DAYS_IN_TMP > 0 in /etc/rc.config on a SuSE-Linux system, a local
user can delete arbitrary files by doing some commands like these:

mkdir -p "/tmp/hhh /somedirectory"
touch -t some-early-date "/tmp/hhh /somedirectory/somefile"
sleep 1d

...

Here is a possible patch for the SuSE package aaa_base-2000.1.3-0:

...
+        find $TMP_DIR/. $OMIT ! -type d \
+                -atime +$MAX_DAYS_IN_TMP -exec rm -f '{}' ';'
+        find $TMP_DIR/. $OMIT -depth -type d -empty -mindepth 1 \
+                -mtime +$MAX_DAYS_IN_TMP -exec rmdir '{}' ';'
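For context, the reason a single space in the directory name is enough: the cleanup loop in the original aaa_base script (quoted in full in James Antill's reply further down) expands the output of find in an unquoted backtick substitution, so the shell word-splits the long pathname on the space and runs rm -f on the second half as a file name of the attacker's choosing. A minimal sketch of that splitting, reusing the hypothetical names from the report:

# Sketch only: demonstrates the word splitting, not the full attack.
mkdir -p "/tmp/hhh /somedirectory"
touch "/tmp/hhh /somedirectory/somefile"
for DEL_FILE in `find "/tmp/hhh /somedirectory" -type f` ; do
    echo rm -f $DEL_FILE
done
# Prints two commands instead of one:
#   rm -f /tmp/hhh
#   rm -f /somedirectory/somefile     <- a path entirely outside /tmp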
mkdir -p /tmp/somedirectory/{_junk,bin}
fill_with_lots_of_junk_to_slow_find_down /tmp/somedirectory/_junk
find /tmp/somedirectory -type f | xargs touch -t some-early-date
touch -t some-early-date /tmp/somedirectory/bin/sh
wait_until_aaa_base_starts_searching /tmp/somedirectory/_junk
mv /tmp/somedirectory /tmp/somedirectory2
ln -s / /tmp/somedirectory
watch /bin/sh disappear...

This will teach you not to use find and rm to clean /tmp :)

--Pavel Kankovsky aka Peak  [ Boycott Microsoft--http://www.vcnet.com/bms ]
"Resistance is futile. Open your source code and prepare for assimilation."

---------- Forwarded message ----------
Date: 21 Apr 2000 16:20:24 -0400
From: James Antill <james@and.org>
To: Peter Münster <peter@gmv.spm.univ-rennes1.fr>
Cc: security@suse.com
Subject: Re: local user can delete arbitrary files on SuSE-Linux

Peter Münster <peter@GMV.SPM.UNIV-RENNES1.FR> writes:
Hello,
If MAX_DAYS_IN_TMP > 0 in /etc/rc.config on a SuSE-Linux system, a local
user can delete arbitrary files by doing some commands like these:

mkdir -p "/tmp/hhh /somedirectory"
touch -t some-early-date "/tmp/hhh /somedirectory/somefile"
sleep 1d
The bug is in /etc/cron.daily/aaa_base on SuSE 6.3, and perhaps also in /root/bin/cron.daily on older SuSE versions. Tested on SuSE 6.0 and 6.3, but probably present on earlier versions as well.
Here is a possible patch for the SuSE package aaa_base-2000.1.3-0:
--- aaa_base~   Mon Jan  3 18:16:55 2000
+++ aaa_base    Fri Apr 21 08:42:19 2000
@@ -158,20 +158,10 @@ done
     for TMP_DIR in $TMP_DIRS_TO_CLEAR ; do
-        for DEL_FILE in `find $TMP_DIR/. $OMIT \( -type f -o -type l \) \
-                -atime +$MAX_DAYS_IN_TMP | sort -r` ; do
-            rm -f $DEL_FILE
-            DEL_DIR=`dirname $DEL_FILE`
-            if [ "$DEL_DIR" != "$TMP_DIR/." ] ; then
-                rmdir $DEL_DIR 2> /dev/null
-            fi
-        done
-    done
Oh dear :O ... apart from the above flaw (-print0, anyone?), this is _very_
raceable by doing...

mkdir -p /tmp/blah/1
mkdir -p /tmp/blah/2
mkdir -p /tmp/blah/3

Do the above a couple of thousand times...

touch /tmp/blah/199/passwd

Wait for the find part of the script to come along...

mv /tmp/blah/199 /tmp/blah/.199 && ln -s /etc /tmp/blah/199
-        for DEL_DIR in `find $TMP_DIR/. $OMIT \( -type d \) \
-                -ctime +$MAX_DAYS_IN_TMP | sort -r` ; do
-            if [ "$DEL_DIR" != "$TMP_DIR/." ] ; then
-                rmdir $DEL_DIR 2> /dev/null
-            fi
+        find $TMP_DIR/. $OMIT ! -type d \
+                -atime +$MAX_DAYS_IN_TMP -exec rm -f '{}' ';'
+        find $TMP_DIR/. $OMIT -depth -type d -empty -mindepth 1 \
+                -mtime +$MAX_DAYS_IN_TMP -exec rmdir '{}' ';'
     done
 fi
This doesn't fix it either, it just makes the race smaller.

--
James Antill -- james@and.org
"If we can't keep this sort of thing out of the kernel, we might as well
pack it up and go run Solaris." -- Larry McVoy.
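The reason the window only shrinks: even with -exec, the pathname is resolved twice, once inside find when the file matches and once inside rm when it calls unlink(). A condensed illustration of the timing, reusing the /tmp/blah names from the recipe above (a sketch, not a complete exploit):

# Roughly what the patched script runs on /tmp (condensed):
find /tmp/. ! -type d -atime +$MAX_DAYS_IN_TMP -exec rm -f '{}' ';'
# 1. find matches /tmp/./blah/199/passwd and substitutes it for {} ...
# 2. ... then rm resolves that same string again when it calls unlink().
# If, between 1 and 2, another process runs
#     mv /tmp/blah/199 /tmp/blah/.199 && ln -s /etc /tmp/blah/199
# the unlink() removes /etc/passwd instead of anything under /tmp.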
On Mon, Apr 24, 2000 at 20:48 +0200, Peter Münster wrote:
Excuse me, please! It seems that my little patch was not such a good idea. So, thanks to Pavel Kankovsky and to James Antill!
Peter
But even if "find ... -exec rm" or "find ... | xargs rm" is not correct,
"for FILE in `find ...`" isn't either. Something *must* be done to make it
work for larger file sets. But all I can come up with is something of the
form

ls_or_find | while read FILE; do
    operate_on $FILE
done

but it is still a lengthy process in terms of time. What's The Right Way(TM)
to go? I feel this kind of problem must have come up often enough that there
should be proven solutions.

I guess the above mentioned "operate_on" procedure has to be a little more
complex, to ensure that $FILE is still inside the tree the ls/find operation
initially started on (eliminating tricks like "some/../path" etc.), and the
find operation has to avoid following links.

Is there a "clever" way to first collect the file names and then process
them all at once (maybe with a sanity check before and after collection,
dropping the results upon mismatch)? I could only think of a list file,
which opens up another can of worms :(

virtually yours
82D1 9B9C 01DC 4FB4 D7B4 61BE 3F49 4F77 72DE DA76
Gerhard Sittig

true | mail -s "get gpg key" Gerhard.Sittig@gmx.net
--
If you don't understand or are scared by any of the above ask your parents
or an adult to help you.
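A slightly more defensive spelling of the loop Gerhard sketches, for reference. This is only a sketch: it assumes GNU find's -print0 and bash's read -d '', and operate_on is still his placeholder. It copes with whitespace in file names and with arbitrarily large file sets, but it still hands full pathnames around, so on its own it does not close the symlink races discussed elsewhere in this thread:

# Sketch only: NUL-delimited names survive spaces and newlines, and nothing
# is accumulated in shell memory, but operate_on must still re-verify that
# each path is inside the tree it started from.
find "$TMP_DIR" -xdev ! -type d -atime +"$MAX_DAYS_IN_TMP" -print0 |
while IFS= read -r -d '' FILE; do
    operate_on "$FILE"
done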
On Mon, 24 Apr 2000, Gerhard Sittig wrote:
but it is still a lengthy process in terms of time. What's The Right Way(TM) to go? I feel this kind of problem must have come up often enough that there should be proven solutions.
Hello Gerhard,
today I got another reply from Valdis Kletnieks with much more detailed
explanations. I am forwarding the message.
Cheers, Peter
--
Peter Münster http://gmv.spm.univ-rennes1.fr/~peter/

---------- Forwarded message ----------
Date: Mon, 24 Apr 2000 14:56:59 -0400
From: Valdis.Kletnieks@vt.edu
To: Peter Münster <peter@GMV.SPM.univ-rennes1.fr>
Cc: BUGTRAQ@SECURITYFOCUS.COM
Subject: Re: local user can delete arbitrary files on SuSE-Linux

On Fri, 21 Apr 2000 08:48:55 +0200, Peter Münster <peter@GMV.SPM.UNIV-RENNES1.FR> said:
+        find $TMP_DIR/. $OMIT ! -type d \
+                -atime +$MAX_DAYS_IN_TMP -exec rm -f '{}' ';'
+        find $TMP_DIR/. $OMIT -depth -type d -empty -mindepth 1 \
+                -mtime +$MAX_DAYS_IN_TMP -exec rmdir '{}' ';'
OK... It must be summertime, and time for re-runs. This was thrashed to
death on Bugtraq some 6 years ago. I'm appending a re-run of the note from
LAST time... ;)

--
Valdis Kletnieks
Operating Systems Analyst
Virginia Tech

--- start appended note

Approved-By: Aleph One <aleph1@DFW.NET>
Message-ID: <199605211710.NAA25637@myrus.com>
Date: Tue, 21 May 1996 18:59:19 -0500
Reply-To: Bugtraq List <BUGTRAQ@NETSPACE.ORG>
Sender: Bugtraq List <BUGTRAQ@NETSPACE.ORG>
From: Zygo Blaxell <zblaxell@myrus.com>
Subject: [linux-security] Things NOT to put in root's crontab
To: Multiple recipients of list BUGTRAQ <BUGTRAQ@NETSPACE.ORG>

Sigh. Here are several things I've just removed from /etc/crontab on every
RedHat Linux system I can get my hands on. They contain security holes
related to the use of 'find' and 'rm' to expire old files in /tmp and other
places. It seems that awareness of this type of security problem is rather
low, so I'll explain the class of problem and how to fix it.
From Redhat's /etc/crontab file:

# Remove /var/tmp files not accessed in 10 days
43 02 * * * root find /var/tmp/* -atime +3 -exec rm -f {} \; 2> /dev/null
# Remove /tmp files not accessed in 10 days
# I commented out this line because I tend to "store" stuff in /tmp
# 41 02 * * * root find /tmp/* -atime +10 -exec rm -f {} \; 2> /dev/null
# Remove formatted man pages not accessed in 10 days
39 02 * * * root find /var/catman/cat?/* -atime +10 -exec rm -f {} \; 2> /dev/null
# Remove and TeX fonts not used in 10 days
35 02 * * * root find /var/lib/texmf/* -type f -atime +10 -exec rm -f {} \; 2> /dev/null
Folks, do NOT use 'find' on a public directory with '-exec rm -f' as root.
Period. Ever. Delete it from your crontab *now* and finish reading the rest
of this message later.

* PROBLEM DISCUSSION AND EXPLOITATION

The immediate security problem is that 'rm' doesn't check that components of
the directory name are not symlinks. This means that you can delete any file
on the system; indeed, with a little work you can delete *every* file on the
system, provided that you can determine the file names (though you might be
limited to deleting files more than ten days old).

First, create the directories and file:

    /tmp/hacker-fest/some/arbitrary/set/of/path/names/etc/passwd

where all but the last component is a directory. Be ready to replace 'etc'
with a symlink to '/etc', so that:

    /tmp/hacker-fest/some/arbitrary/set/of/path/names/etc -> /etc

i.e. the path components of the file name will point to a file named
'passwd' in a different directory. If the replacement operation occurs
between when 'find' sets {} to "/tmp/hacker...etc/passwd" and when 'rm'
calls unlink on "/tmp/hacker...etc/passwd", then rm will in fact delete
'/etc/passwd', and not a file in /tmp. Deleting other files is left as an
exercise.

The race condition is really easy to win. Create a directory with 400 path
components, like this:

    /tmp/hacker-fest/a/a/a/a/a/a/a.../a/a/a/etc/passwd        (1)

Then arrange for each of the 'a' components to be a symlink to a directory
somewhere near the bottom of a similar tree. For example, /tmp/hacker-fest/a
could be a symlink to

    /tmp/hacker-fest/b/b/b/b/b/b/b/b/b/.../b/b/b/b/b/b/a

which could be a symlink to

    /tmp/hacker-fest/c/c/c/c/c/c/.../c/c/c/c/c/c/c

and so on. In fact, *each* path component can be a symlink up to about 8
levels or so. Any operation such as stat(), open(), lstat(), etc. on one of
these pathnames will cause the kernel to follow each and every symlink. The
difference between lstat() and stat() in this case is that lstat() will not
follow the *last* symlink.

This will make lstat() and friends *extremely* slow, on the order of several
*minutes* per lstat() operation, because each lstat() is now reading in
several thousand inodes and disk blocks. If you fill each directory with
several hundred entries, then create the entry you want, then delete the
others, you force the kernel to waste its time reading kilobytes of empty
directory blocks--in fact, you can make one stat() or unlink() operation
read almost the entire disk in an order designed to maximize disk head
motion if you know what you're doing. If you have an NFS, CDROM, or
floppy-disk filesystem handy, you can get *weeks* per lstat().

Of course, 'find' will normally see the first symlink and stop. To prevent
this, you rename the original directory (at (1) above) and create another
directory with the same name and about 5000 empty files, some of which have
the same name as files you want to delete. Note that these 5000 empty files
can all be hard links to the same file, to save precious inodes for more of
those symlinks. 'find' will spend considerable time iterating through these
5000 files. When it does (you'll be able to tell because the atime of the
directory changes as find reads it), put the directory with the millions of
symlinks at (1) back with a couple of rename operations. Some versions of
'find' will not be adversely impacted by this, but 'rm' definitely will.
It is usually sufficient to simply create the 400-component-long directory,
put 5000 files in it, wait for the atime of the directory to change, then do
the rename so that 'rm' follows a symlink. I used this technique to remove
/etc/crontab as a test case.

If you have:

    /tmp/hacker-fest/a/a/a/a/a/.../a/etc/passwd   (and 5000+ other files)
    /tmp/hacker-fest/a/a/a/a/a/.../a/usr

where 'usr' is a symlink to '/usr', you can get some implementations of find
to start recursing through /usr as well.

* OTHER PROBLEMS WITH THIS CRONTAB

A user can set the atime of any file they own to an arbitrary value, and
programs like zip, tar, and cpio will do this for you automatically; this
makes 'atime' an almost useless indicator of when a file was last used
('mtime' has the same problem). Either the file will be deleted too early,
because it was extracted from an archive using a program that preserves
timestamps, or users can set the atime to well into the future and use /tmp
space indefinitely. The later of ctime (to detect writes) and atime (to
detect reads; one must check that the atime is not in the future) is a good
indicator of when a file was last used.

Miscellaneous bugs: the use of '*' means that files in a directory named
'.foo' will never be cleaned (and you can prevent 'find' from working at all
by putting more than 1020 files in /tmp). There are subdirectories of
/var/catman that aren't properly handled by the 'find' command given (local
and X11). You can't delete a directory with 'rm -f'. In other words, not
only is RedHat's /etc/crontab a major security hole, it doesn't actually
work properly, either. :(

* FIXES

The easiest way to fix this is to get rid of the find/rm stuff completely.
If you need a garbage collector, try our LRU garbage collection daemon at
the URL given below. Adding a system call that sets a flag preventing a
process from ever following a symlink would be non-portable, but efficient
and effective.

The next easiest way to fix this is to replace 'rm' with a program that does
not follow symlinks. It must check each filename component in turn by doing
an lstat() of the directory, a chdir() into the directory, and further
lstat()s to check that the device/inode number of '.' is the same as the
directory's device/inode number before the chdir(). The parameter of the
'unlink' or 'rmdir' system call must not contain a slash; if it does, then
the directory name before the slash can be replaced by a symlink to a
different directory between the verification of the path components and the
actual unlink() call.

Another way to fix this is with a smarter version of find. A smart find does
the chdir() and lstat() checks to make sure that it never crosses a symlink,
and calls the program given to 'exec' with a filename that has no directory
components, relative to the current directory. Thus, to delete:

    /tmp/hacker-fest/a/a/a/a/a/.../etc/passwd

find first carefully chdir()s into /tmp/hacker-fest/a/a/a/a/a/.../etc
(checking for attempts to exploit race conditions before and *after* each
chdir()), and fails if any of the components is a symlink, plugging the hole
described above. After verifying that the '.../etc' is really a subdirectory
of /tmp, and not some random point on the filesystem, find execs the
command:

    rm -f ./passwd

which is secure as long as '.' isn't in your PATH. Note the leading './' to
prevent rm from interpreting the filename as an option. Note: this is in
*addition* to the checks that find already makes to determine whether a file
is a symlink *before* chdir()ing into it.
It must make sure that components of the path that have *already* been
tested are not replaced with symlinks or renamed directories *after* find
has started processing subdirectories of them. Note that the 'smart' find
without the post-chdir symlink tests won't work. While smart-find is
processing:

    /tmp/hacker-fest/a/a/a/a/*

you can rename /tmp/hacker-fest/a/a/a/a to /tmp/hacker-fest/a/a/b (note: one
less pathname component), and eventually smart-find will 'cd ..'; but since
the current directory of find has moved, '..' will move as well, and
eventually smart-find will end up one level too high and can start
descending into other subdirectories of '/'. To help this along you may need
to create:

    /tmp/hacker-fest/usr
    /tmp/hacker-fest/var

etc.

* SAFE LRU GARBAGE COLLECTION

Our LRU /tmp garbage collector daemon is available at
<URL:http://www.ultratech.net/~zblaxell/admin_utils/filereaper.txt>. It is
implemented in perl5. It depends on a Linux-specific 'statfs()' system call
to monitor available free space, so non-Linux people will need to do a port
(send me patches and I'll incorporate them).

Our garbage collector: handles the above security problems correctly,
handles pathnames of more than 1024 characters, uses smarter last-access
estimates than just atime or ctime, can support "permanent" subdirectories,
handles files, symlinks, directories, devices, and mount points correctly,
can support a minimum age for files (e.g. no files < 1 day old), deletes
oldest files first, deletes files only when disk space is low, and responds
in less than ten seconds to low disk space conditions.

Our garbage collector works on any directory where files can gracefully
disappear at arbitrary times, such as /var/catman, /tmp, /var/tmp, TeX font
directories, and our HTTP proxy cache. One directory where the garbage
collector doesn't work very well is /var/spool/news; we had to hack things
up a bit to fix the article databases when article files disappear.

--
Zygo Blaxell. Former Unix/soft/hardware guru, U of Waterloo Computer Science Club.
Current sysadmin for Myrus Design, Inc.
10th place, ACM Intl Collegiate Programming Contest Finals, 1994.
Administer Linux nets for food, clothing, and anime.
"I gave up $1000 to avoid working on windoze... *sigh*" - Amy Fong
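For readers who want to see the shape of the "never cross a symlink, finish with a relative, './'-prefixed unlink" rule from the note above, here is a minimal shell sketch. It is illustrative only and uses a hypothetical helper name, safe_rm; the gap between each symlink test and the following cd is itself a race window, which is why the note ultimately recommends doing these checks with lstat()/chdir() inside a single program (or using a dedicated garbage collector) rather than in a shell script:

#!/bin/sh
# safe_rm TOP RELPATH -- delete TOP/RELPATH without knowingly crossing a
# symlink in the directory part, finishing with a './name' relative unlink.
# Sketch only: the [ -L ] test and the cd are still two separate steps.
safe_rm() {
    top=$1; rel=$2
    cd "$top" || return 1

    dir=`dirname "$rel"`
    file=`basename "$rel"`

    old_ifs=$IFS
    set -f                          # no globbing while we split on '/'
    IFS=/
    set -- $dir                     # split the directory part on '/'
    IFS=$old_ifs
    set +f

    for comp in "$@"; do
        case "$comp" in
            ''|.) continue ;;                        # harmless components
            ..)   echo "refusing '..'" >&2; return 1 ;;
        esac
        if [ -L "$comp" ] || [ ! -d "$comp" ]; then
            echo "refusing to cross '$comp'" >&2     # symlink, or vanished
            return 1
        fi
        cd "$comp" || return 1                       # still racy, see above
    done

    rm -f -- "./$file"              # relative name with leading './', as in
}                                   # the note's "rm -f ./passwd" example

# Hypothetical usage, with the path from the note:
#   safe_rm /tmp "hacker-fest/a/a/etc/passwd"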