--- Miles Berry <mberry@st-ives.surrey.sch.uk> wrote:
If I set up a room full of dual boot machines, I can get NIS to pull usernames and passwords off the server, yes?
Yes, essentially. NFS and NIS/YP together might seem like overkill, though. If you have NIS set up, then the following ought to do it:

  ypcat passwd | awk -F: '$3 > 499 {print $1}'

which will list all usernames with a UID greater than 499. (UIDs below 500 are traditionally system accounts -- ones that aren't allowed to log in, but which provide a basis for permissions, etc.) Any user with a UID of 500 or above is usually a real, login-capable one.
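If you also want to see where each user's home directory will live (handy for the roaming-homes question below), a slight variation on the same pipeline does it. This is just a sketch, and it assumes your NIS passwd map uses the standard seven-field passwd format:

  #!/bin/sh
  # List ordinary users (UID >= 500) and their home directories
  # from the NIS passwd map.
  ypcat passwd | awk -F: '$3 >= 500 { printf "%-12s %s\n", $1, $6 }'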
I'll also want roaming desktops and profiles, which again I _think_ NIS/NFS can do by mapping home directories from the server, but correct me if I'm wrong.
You're not wrong. NIS/YP, NFS and Samba generally go well together. What I'll do for you is detail how to set up each, and how they interact, if you like. NFS is the trickiest of them all; it has developed alongside UNIX more or less from its inception, so many variants exist as a result, and no two NFS implementations are quite alike. I should point out that it has been some years since I last used Samba for anything serious, so bear with me.

As far as Linux is concerned, there are two flavours of NFS: one lives in kernel-space, the other in user-space. Why two? Well, like most things, there just are -- though this was born more out of necessity than anything else. Each has advantages over the other, but the user-land NFS tends to address some issues that the kernel one does not. Unless you're doing anything specific and tricky with NFS, it probably won't matter which one you use.

If we take the NFS server first of all, the first thing you'll want to do is install the NFS server software. Again, unless you know otherwise, I'd go with the userland variant. Setting up the server is relatively straightforward -- you just start it:

  /etc/init.d/nfs start

(or some variant thereof). There are several parts to NFS, such as the portmapper, rpc.mountd and friends. I won't bore you with the low-level details of their roles, but they're crucial to NFS' operation; the script in /etc/init.d takes care of starting them all for us.

The NFS server really revolves around configuring /etc/exports -- the file the NFS server uses to specify which clients each share is exported to (you can think of it as an ACL, except that it isn't). I should point out, and this is critical, that /etc/exports IS VERY VERY VERY SENSITIVE TO WHITESPACE. If you miss out a space, put one in where it shouldn't be, or use a tab instead, NFS will silently throw its toys out of the pram, leaving you scratching your head wondering why nothing works even though the file looks fine.

So what does this exports file look like? Its structure is important; generically, it looks like this:

  /some/nfs/dir machA(optionA,optionB) machB(optionAB,optionBB)

You can list as many machines as you like, separated by a space, as long as they're all on the same line. If you're using DHCP, listing individual machines would be a nightmare, so luckily the use of hostnames and domain names is allowed, as is specifying IP subnets, should that be easier for you.

As to the options available when exporting, the following can be used:

root_squash, no_root_squash -- a hangover from Sun's original implementation of NFS. With root_squash (the default), any request made as root on a client is mapped to the unprivileged user 'nobody' on the *server*; no_root_squash disables that mapping. It's not something you'll usually need to worry about, but I'll mention it anyway.

ro -- read-only (which, unless otherwise stated, is the default).

rw -- read-write.

sync, async -- these relate to how the filesystem is updated from NFS' write-cache. async is generally the more dangerous of the two, as it doesn't care when the buffers are flushed. You generally won't have to worry about specifying either of these.

Depending upon what you want to do (host-specific, of course), you can specify different options for different hosts. I don't wish to insult your intelligence, but this issue of whitespace is an important one...
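To make that concrete, here's the classic whitespace trap (the path and hostname are just examples):

  # Correct: 'client1' gets read-write access.
  /home   client1(rw)

  # Wrong: the stray space means 'client1' gets the default options,
  # and the (rw) applies to everybody else -- i.e. the whole world.
  /home   client1 (rw)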
So a valid entry for an export might look like:

  /home 192.168.0.1(rw) 192.168.0.100(ro)

It's important to remember that if you edit /etc/exports whilst NFS is running, you'll need to tell it that you've made changes, so that NFS knows about them. That's done via the exportfs command, as in:

  exportfs -ra

which should ensure the changes take effect immediately.

As for the NFS clients, it's much the same as the server. Again, you'll need to ensure that you have the NFS client software installed and started:

  /etc/init.d/nfs start

(or some variant thereof). This time you just have to tell the client that you're mounting NFS shares. If you're using a stock kernel (that is, one that's been compiled for you), then you'll already have NFS filesystem support compiled in. If you're using one you compiled yourself, you'll have to ensure that you did compile it in.

So, let's assume (by our example above) that we want machine 192.168.0.1 to mount /home via NFS. To do that we'd edit /etc/fstab, thus:

  the.nfs.server:/home /home nfs rw,intr,rsize=xxx,wsize=xxx 0 0

(note that the above should all be on one line). It's not that different from any other line you see in /etc/fstab, except that the server's address and the exported directory (which must be declared in /etc/exports) are used, mounting it onto "/home" on the local workstation.

I want to draw your attention to the mount options. rw is self-explanatory, but there are several other options available when mounting:

intr -- a handy option that lets processes be interrupted (and hence not freeze up solid) if the NFS link with the server somehow dies. It's useful in some circumstances, but is sometimes a victim of its own usefulness.

rsize, wsize -- something you might want to consider if the NFS share is under heavy load. These values (in bytes, I might add) impose a limit on how much data is read and written, respectively, in any one request. Useful so you don't bottleneck the server.

That's more or less it for NFS. As always, though, there are caveats, notably that NFS is a black hole (and general PITA) for permissions and security [1]. The classic case is that NFS will throw a wobbly if it finds that UID and GID numbers are not in sync on the client and the server. NFS associates permissions (at the low level) with numeric UIDs and GIDs -- which makes sense, but if they don't match up, NFS will refuse even read permission. So it's something you'll want to look into; there's a quick check for this sketched just after the NIS introduction below.

Related to that is permissions in general. You *must* ensure that you have the correct perms set (especially group perms) on the physical top-level directory that the mount-point on the NFS client will use, and on any directories or files thereunder that you wish to give a user access to.

So, NIS/YP. NIS/YP [2] is an old form of password management on Unix systems, and has generally now been superseded by the Shadow suite and friends. That said, it still provides a useful means of mapping usernames to passwords, although it is by no means limited to that. As I said earlier, you can easily obtain a list of users on the system via NIS.

The NIS server, like NFS, can be set up in much the same way. The NIS server is controlled via the program "ypserv". Essentially the NIS server's job is to store all of the maps for the clients, in an old file format (DBM). How those keys are stored and defined depends. There are two versions of YP -- 1.1 and 1.2 -- and I suspect 1.2 is the de facto standard now, so I'll concentrate on that.
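(Here's that quick UID/GID check I promised. It's only a sketch -- the username is just an example, and the numbers must agree when run on both the NFS server and a client:)

  # The numeric uid/gid printed here must match on server and client.
  id mberry

  # Or compare the numeric owners of the exported files directly:
  ls -ln /export/user/staff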
The program "ypinit" is responsible for creating maps to new domains, and "pwupdate" is reponsible for the password mappings only. Thankfully, the source databases (typically /etc/passwd, and /etc/group) need not be "real" (in an OS-operation sense), but it helps with Samba integration if they are.) The command "yptools" also understands shadow file-formats, so that can be used to merge with the NIS/YP database. Setting up NIS is no walk in the park, and it uses Makefiles to generate the various DBM files. What you'll want to do is look in: /var/yp/securenets and: /etc/ypserv.conf and (to an extent): /var/yp/Makefile and edit those as appropriate. I'm being deliberately vague about this point as it is network-topology specific -- a detail which I do not know for your network. :) When you've done that you'll want to generate the new maps: makedbm will take care of that for you. Note that if you do update any files in /var/yp post that command, you just have to run "make" from within it. Don't use ypinit for updating map files, please. You'll want to ensure that ypserv (the NIS server program, if you like) is up and running. It used to be the case that this was done via portmap and rpcinfo(8), but I now believe the script in /etc/init.d takes care of that for you (as it can get messy). Assuming ypserv starts, you'll then want to generate the the NIS database, which the following command achieves: ypinit -m As to the client, that's more simpler (if such a definition can be applied here.) The thing to remember is that you *must* set a valid domainname on the client via the command: /bin/domainname ... if you have not already done so. The crux, then relies in getting GLIBC to use NIS looksups when dealing with network-oriented issues. I don't want to bore you with why, except to say that /etc/nsswitch.conf is the file you want to edit, such that it looks like the following: passwd: files nis shadow: files nis group: files nis The ordering of "files nis" is VERY important -- and nis *must* come after files, else you will get some damn odd results from the NIS client (and other file operations for that matter.) That should be just about it for NIS. You can then use any manner of yp* commands to look up values from the NIS server.
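By way of a quick sanity check (just a sketch -- it assumes ypbind is running on the client and that the maps built cleanly), you can confirm the client really is talking to the server:

  # Which NIS server are we bound to, and what maps does it hold?
  ypwhich
  ypwhich -m

  # Pull the passwd map straight from the server...
  ypcat passwd | head

  # ...and confirm GLIBC's nsswitch path now sees the NIS users too.
  getent passwd | tail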
I also want my users to have easy access to three network shares, ideally by desktop icons or something equally obvious (I'm thinking symlinks inside their mapped home directories...), specifically their
That *might* work -- but problems arise with symlinks across mount-points and different file systems.
windows My Documents share (/export/user/staff/mberry), their group share (/export/groups/staff) and the share for the school (/export/all). Ideally I don't want them to have access to the shares belonging to other users or groups! All of which I can do in windows via Samba, but I'm not clear how, or even if, I can do this in a Linux environment via NFS.
Well, what you would do is set (on the NFS server) stringent group perms on the exported directories, and plonk all of the relevant users into that group as members (remembering to update NIS/YP as you go). That way, if a user is not a member of the group, they can't view it. Does that help?

-- Thomas Adam

[1] NFS -- "No F**ing Security" (and many permutations...)

[2] I use NIS/YP here to distinguish it clearly from NFS. I technically shouldn't call it YP anymore, as that was dropped after "Yellow Pages" claimed the rights to the name.
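P.S. A rough sketch of the kind of group setup I mean, using the paths from your mail. The group name 'staff' is just an example, and your distribution's tools may differ slightly:

  # On the NFS server: one group per share, members added as needed.
  groupadd staff
  gpasswd -a mberry staff                 # repeat per staff member

  # Staff group share: group members only, nobody else.
  chgrp -R staff /export/groups/staff
  chmod -R 2770  /export/groups/staff     # setgid: new files stay group 'staff'

  # Whole-school share: readable by everyone.
  chmod -R 0775 /export/all

  # Per-user area: owner only.
  chown -R mberry:staff /export/user/staff/mberry
  chmod -R 0700 /export/user/staff/mberry

  # Then rebuild the NIS maps so clients see the new group membership.
  cd /var/yp && make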