Hi all,

I was wondering: there are several ways to separate the machine where storage for home directories is provided from the place where it is needed. This is needed in case you have multiple (virtual) desktop machines where a number of people could log in.

The most obvious techniques are NFS and SMB, and perhaps, if you can enforce that no one is ever logged in twice, one might use raw block devices.

The point that kept me awake is scalability: it might very well work for ten or a hundred users, but what when confronted with 1,000, 10,000 or more users?

At the desktop side, I presume you have to limit the number of simultaneously logged-in users to 100-200 (depending on the amount of memory in a server). And with PAM, you can mount the user's dir when needed.

But how about the storage side? At one end of the spectrum you can have a single NFS/SMB server exporting the whole /home, while at the other end you could export each user's home directory individually. Clearly, from the point of view of load and availability, single exporting servers should be ruled out. But how many exports is feasible? And how many NFS/SMB services can you have on a single server? AFAICR, KVM or Xen create far too much overhead for separation, but perhaps NFS/SMB server(s) could be placed into LXC containers?

On purpose I leave out a number of elementary items, like the storage itself, the network itself and geographical redundancy.

Any admins around here confronted with similar questions? If there are any rules of thumb, I'd like to know, as it kept me awake all night ;-)

My team and I are not responsible for building such an environment, just for demos and proofs-of-concept, but these have to be realistic and scalable.

Hans
-- 
To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org
To contact the owner, e-mail: opensuse+owner@opensuse.org
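[Editor's note: the on-demand mounting Hans describes is usually done with a wildcard automounter map rather than PAM alone. A minimal sketch, assuming an NFS server named "nfsserver" exporting /export/home (both names are made up):

```
# /etc/auto.master
/home  /etc/auto.home  --timeout=300

# /etc/auto.home -- wildcard map: each user's dir is mounted on first access
# and unmounted again after the timeout, so only active users hold mounts
*  -fstype=nfs,hard,intr  nfsserver:/export/home/&
```

The `&` is replaced by the key being looked up, i.e. the login name, so one map line serves any number of users.]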
On 14/05/13 09:24, Hans Witvliet wrote:
Hi all,
But how about the storage side? At one end of the spectrum you can have a single NFS/SMB server exporting the whole /home. While at the other end you could export each user's home directory individually.
Hi. We run a CIFS fileserver for 2000 students sharing 110 Linux and Windows clients. Even with a small number of clients, it's not possible for us to export the whole of home and have it mounted all the time. We had to use the automounter to get anywhere near. With decent hardware I get the feeling you could get away with the single mount. It is a lot easier. Our AD schema doesn't have automount.
L x
On 14.05.2013 09:44, lynn wrote:
Hi. We run a CIFS fileserver for 2000 students sharing 110 Linux and Windows clients. Even with a small number of clients, it's not possible for us to export the whole of home and have it mounted all the time. We had to use the automounter to get anywhere near. With decent hardware I get the feeling you could get away with the single mount. It is a lot easier. Our AD schema doesn't have automount. L x
I like to use a custom PAM module to solve such a problem. Here is an old article: http://www.novell.com/coolsolutions/appnote/17107.html
If you want to see it in action, you can try the newest version of our project desktop4education: http://proxy.asn-graz.ac.at/~d4e/experimental/
Project website: http://d4e.at <-- sorry, it's all in German because we developed the project for the Austrian Federal Ministry for Education, the Arts and Culture.
-- Matthias
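[Editor's note: for comparison with a custom module, the stock pam_mount module can do a per-login home mount too. A sketch of a wildcard CIFS volume for /etc/security/pam_mount.conf.xml (server and share names are assumptions, not from the thread):

```
<volume user="*" fstype="cifs" server="fileserver"
        path="homes/%(USER)" mountpoint="/home/%(USER)"
        options="sec=krb5" />
```

`%(USER)` is pam_mount's substitution for the login name; the mount happens at session open and is released at logout.]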
Hans Witvliet wrote:
Hi all,
I was wondering: there are several ways to separate the machine where storage for home directories is provided from the place where it is needed. This is needed in case you have multiple (virtual) desktop machines where a number of people could log in.
The most obvious techniques are NFS and SMB, and perhaps, if you can enforce that no one is ever logged in twice, one might use raw block devices.
The point that kept me awake is scalability: it might very well work for ten or a hundred users, but what when confronted with 1,000, 10,000 or more users?
At the desktop side, I presume you have to limit the number of simultaneously logged-in users to 100-200 (depending on the amount of memory in a server). And with PAM, you can mount the user's dir when needed.
automount works very well too when working with individual mounts.
Clearly, from the point of view of load and availability, single exporting servers should be ruled out. But how many exports is feasible?
And how many NFS/SMB services can you have on a single server?
Depends on the server :-)
-- 
Per Jessen, Zürich (14.3°C)
http://www.dns24.ch/ - free DNS hosting, made in Switzerland.
-----Original Message----- From: Per Jessen <per@computer.org> To: opensuse@opensuse.org Subject: Re: [opensuse] home dirs Date: Tue, 14 May 2013 10:33:14 +0200
Depends on the server :-)
-----Original Message-----
Hi Anton, David, Per, Lynn and all others,
Thanks also for the trip down memory lane: been there, done that, especially Hummingbird ;-) And no problem with articles in other languages, as long as I don't have to embarrass myself by trying to speak French or so...
Yes, I intend to use the automounter, so users' dirs are not mounted when not needed. It is not so much the client I'm worried about: there it is just adding more clients behind a load balancer.
I was warned that I should not use ext3 for /home, as you are limited to 30,000 subdirs. For ext4 I know that limit is raised to 60,000. Is there a known ceiling for btrfs?
For a very small-scale demo I can use a 4TB NAS, but I can probably use a 9TB IBM SAN with double NICs.
For another project, I used a single NFS server exporting home dirs only holding NX config and Firefox cache. But on HP ProLiant G7 with SLES, we ran into lots of (performance) problems. So this time I would like to avoid that with an overdimensioned demo, instead of the duct tape afterwards.
I understood that it is hard to transparently balance NFS servers, and that I should avoid it by specifying different home dirs from LDAP. I was told that this is the way they do it in a Windows environment. Can I also do this for Linux?
And I forgot to mention: I start with a very thin client (only NX/x2go/ThinLinc) and they get to choose between a virtualised remote XP/W7 desktop or an LXDE desktop. And either desktop must be provided with home storage.
Hans
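[Editor's note: on the LDAP question: yes, on Linux the home directory path comes straight from the posixAccount homeDirectory attribute, so different users can point at different servers' mount points without any client-side balancing. A sketch of the attribute (DN, object classes and path are invented):

```
dn: uid=jdoe,ou=people,dc=example,dc=org
objectClass: posixAccount
homeDirectory: /home/srv02/jdoe
```

Combined with an automount map per server directory, each login then only ever touches the server its entry names.]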
On 15/05/13 11:25, Hans Witvliet wrote:
And I forgot to mention: I start with a very thin client (only NX/x2go/ThinLinc) and they get to choose between a virtualised remote XP/W7 desktop or an LXDE desktop. And either desktop must be provided with home storage.
Ooo, there's a gotcha there concerning file locking. We've never got NFS/CIFS locking to work. We had to go CIFS-only if we wanted file locking to just work between LXDE/XP/W7. This really is a killer with M$ Office and LibreOffice docs. The good news is that CIFS on openSUSE is great these days: Kerberos, autofs, the works.
L x
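[Editor's note: an autofs CIFS map of the kind Lynn mentions might look like this (server name is assumed; the Kerberos ticket comes from the login):

```
# /etc/auto.home -- wildcard CIFS map, Kerberos security
*  -fstype=cifs,sec=krb5  ://fileserver/homes/&
```

Going CIFS-only on both the Linux and Windows sides means a single lock manager (the SMB server) arbitrates all clients, which is why the mixed NFS+CIFS case is the one that breaks.]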
Hans Witvliet wrote:
Hi Anton, David, Per, Lynn and all others...
[snip]
It is not so much the client I'm worried about: there it is just adding more clients behind a load balancer. I was warned that I should not use ext3 for /home, as you are limited to 30,000 subdirs. For ext4 I know that limit is raised to 60,000. Is there a known ceiling for btrfs?
Dunno, but jfs and xfs don't have one.
For another project, I used a single NFS server exporting home dirs only holding NX config and Firefox cache. But on HP ProLiant G7 with SLES, we ran into lots of (performance) problems.
On which kind of ProLiant? There are many G7s :-) -- Per Jessen, Zürich (20.3°C) http://www.dns24.ch/ - free DNS hosting, made in Switzerland.
Per Jessen wrote:
Hans Witvliet wrote:
Hi Anton, David, Per, Lynn and all others...
[snip]
It is not so much the client I'm worried about: there it is just adding more clients behind a load balancer. I was warned that I should not use ext3 for /home, as you are limited to 30,000 subdirs. For ext4 I know that limit is raised to 60,000. Is there a known ceiling for btrfs?
Dunno, but jfs and xfs don't have one.
Dunno whether reiser has one, but it's certainly got a few more zeros on the end if it has. And there are/were any number of Unix boxes around where there were 26 subdirectories of /home (a-z) and user home directories were inside those. Of course the scheme can be extended to be as baroque as you like. Keeps sysadmins in employment adjusting lots of scripts.
Dave, et al -- ...and then Dave Howorth said... % ... % And there are/were any number of Unix boxes around where there were 26 % subdirectories of /home (a-z) and user home directories were inside Yeah; that's one way. % those. Of course the scheme can be extended to be as baroque as you % like. Keeps sysadmins in employment adjusting lots of scripts. Oh, pooh. That's what automation is for :-) HAND :-D -- David T-G See http://justpickone.org/davidtg/email/ See http://justpickone.org/davidtg/tofu.txt
David T-G wrote:
Dave, et al --
...and then Dave Howorth said... % ... % And there are/were any number of Unix boxes around where there were 26 % subdirectories of /home (a-z) and user home directories were inside
Yeah; that's one way.
% those. Of course the scheme can be extended to be as baroque as you % like. Keeps sysadmins in employment adjusting lots of scripts.
Oh, pooh. That's what automation is for :-)
Which automation is that then? What rpm is it in? For example, which preexisting command or script would allow me to create a new user in a scheme with a-f/ g-l/ m-r/ s-z/ subdirectories without running the risk of a typo? Does YaST do that out of the box? That's why admins need to write or tweak scripts.
Dave -- ...and then Dave Howorth said... % % David T-G wrote: % > Dave, et al -- % > % > ...and then Dave Howorth said... % > % % > ... ... % > % those. Of course the scheme can be extended to be as baroque as you % > % like. Keeps sysadmins in employment adjusting lots of scripts. % > % > Oh, pooh. That's what automation is for :-) % % Which automation is that then? What rpm is it in? % ... % That's why admins need to write or tweak scripts. Scripts, yes. Lots of scripts, not so much, at least not for one task. I simply meant that you write it once and then it works for every user you set up. HAND :-D -- David T-G See http://justpickone.org/davidtg/email/ See http://justpickone.org/davidtg/tofu.txt
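[Editor's note: the "write it once" wrapper David means might be no more than a few lines. A sketch using Dave's hypothetical a-f/ g-l/ m-r/ s-z/ layout (the paths and scheme are his example, not a real tool):

```shell
#!/bin/sh
# Derive the bucket directory from the first letter of the login name,
# so nobody ever types the bucket by hand (and so no typo is possible).
home_for() {
    user=$1
    case $(printf '%.1s' "$user") in
        [a-f]) bucket=a-f ;;
        [g-l]) bucket=g-l ;;
        [m-r]) bucket=m-r ;;
        [s-z]) bucket=s-z ;;
        *) echo "unexpected login: $user" >&2; return 1 ;;
    esac
    printf '/home/%s/%s\n' "$bucket" "$user"
}

home_for hans    # /home/g-l/hans
home_for alice   # /home/a-f/alice
```

A real wrapper would feed the result to `useradd -d` (and `mkdir -p` the bucket first), but the point stands: the mapping lives in exactly one place.]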
David T-G wrote:
Dave --
...and then Dave Howorth said... % % David T-G wrote: % > Dave, et al -- % > % > ...and then Dave Howorth said... % > % % > ... ... % > % those. Of course the scheme can be extended to be as baroque as you % > % like. Keeps sysadmins in employment adjusting lots of scripts. % > % > Oh, pooh. That's what automation is for :-) % % Which automation is that then? What rpm is it in? % ... % That's why admins need to write or tweak scripts.
Scripts, yes. Lots of scripts, not so much, at least not for one task. I simply meant that you write it once and then it works for every user you set up.
I guess you've never had to live with this situation then.
HAND
:-D
Per Jessen said the following on 05/15/2013 10:45 AM:
Hans Witvliet wrote:
Hi Anton, David, Per, Lynn and all others...
[snip]
It is not so much the client I'm worried about: there it is just adding more clients behind a load balancer. I was warned that I should not use ext3 for /home, as you are limited to 30,000 subdirs. For ext4 I know that limit is raised to 60,000. Is there a known ceiling for btrfs?
Dunno, but jfs and xfs don't have one.
I don't think ReiserFS has one, but that's moot. What's the issue here? Resisting a mkdir resource-exhaustion DoS attack? Let's see: if you don't let users do that and only limit them to the very basic subdirectories they are given, then that gives you 10,000+ users. I suspect that "wide, not deep" might be a problem. Perhaps quotas are needed.
Of course it all depends on context. One client certainly had more than 25,000 seats, but (a) they weren't all on one machine and (b) there were many shared resources oriented towards departments and projects.
A real system 'grows' and you need to be flexible enough to change tactics as it grows. Don't expect a paper solution to be valid a couple of years from now. If what they are saying about "post-PC" http://www.zdnet.com/thorsten-heins-the-only-exec-in-the-mobile-biz-that-get... and cloud makes any sense, then who knows what you'll be dealing with.
Dealing with things dynamically might also involve doing what I often do, that is, moving a directory tree to a new FS. If your mount-on-demand mechanism works (be it from systemd, autofs or whatever) then there's no reason that subtrees need to be mounted all in one go, or even all live on one server. I can't tell you details since it's going to vary wildly with context, but you might, for example, have all of /home on one server but the ~/Documents/ spread around a number of machines.
-- 
December 32, 1999: We're pleased to report no Y2K failures!
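[Editor's note: Anton's point about subtrees not all living on one server falls out of the ordinary automounter map mechanism; a sketch of an indirect map mixing servers (all names invented):

```
# /etc/auto.home -- explicit keys may point anywhere;
# the wildcard entry is the fallback for everyone else
hans   srv01:/export/home/hans
lynn   srv02:/export/home/lynn
*      srv03:/export/home/&
```

Moving a user to another server is then a one-line map change (plus the data copy), invisible to the client.]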
On Wed, May 15, 2013 at 5:25 AM, Hans Witvliet <suse@a-domani.nl> wrote:
It is not so much the client i'm worried about: There it is just adding more clients behind a load balancer. I was warned that i should not use ext3 for /home, as you are limited to 30,000 subdirs. For ext4 i know that limit is raised to 60,000. Is there a known ceiling for btrfs?
Quoting wiki "A directory can have at most 31998 subdirectories, because an inode can have at most 32000 links." So that's 30,000 immediate subdirectories of a single directory, not for the tree of sub-directories. You can go as deep as you want, but only 31998 wide! I have never seen /home (or the windows equivalent) with more than a few thousand user directories. Are you sure you need to do that. I would hope you would find a way to break them into groups. /home/sales/*, /home/engineering/*, etc. or /home/a-e/* /home/f-m/*, etc. If you plan to have a single directory with 30K+ subdirectories, you need to do some performance testing of the potential filesystems to see where they fall off from a usability perspective. I've done that with NTFS and it sees significant degradation when you hit 10,000 items in a directory (folder). That's for basic files. I don't know what happens to it for 10's of thousands of sub-directories within a directory, but it can only be worse. fyi: in one of my job roles we sometimes work with millions of TIFFs sequentially named: 0000000001.tiff, 00000000002.tiff, etc. We build trees of sub-dirs based on the name. No single dir holds over 1000 tiffs typically. This is why I did the NTFS performance tests with very large directories/folders. Greg -- To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse+owner@opensuse.org
From: Greg Freemyer <greg.freemyer@gmail.com> To: opensuse@opensuse.org Subject: Re: [opensuse] home dirs Date: Wed, 15 May 2013 12:19:27 -0400
-----Original Message-----
Hi Greg, your quote is about btrfs?
Well, I need** it wide, not deep... For my org, all the uids are 8 random alphanumeric characters, and there are between 60k and 70k users, with no regard to department or so. To avoid some illogic from the past, I'll have to assume something above the 64K ceiling ;-)
** "need" is a big word; currently we use subdirs based upon the last character (A-Z0-9) of the uid, so 36 subdirs. But that is something I would like to avoid...
On Wed, May 15, 2013 at 6:01 PM, Hans Witvliet <suse@a-domani.nl> wrote:
Hi Greg, your quote is about btrfs?
Well, I need** it wide, not deep... For my org, all the uids are 8 random alphanumeric characters, and there are between 60k and 70k users, with no regard to department or so. To avoid some illogic from the past, I'll have to assume something above the 64K ceiling ;-)
** "need" is a big word; currently we use subdirs based upon the last character (A-Z0-9) of the uid, so 36 subdirs. But that is something I would like to avoid...
I was talking about ext3. I was curious why there would be a 30,000 sub-directory limit. If you want 70,000+ subdirectories in one directory, I would definitely do some performance testing. Just because an FS may support it doesn't mean it will be efficient. A new thread to discuss just that issue is something I would do in your shoes.
Greg
-----Original Message----- From: Greg Freemyer <greg.freemyer@gmail.com> To: opensuse@opensuse.org Subject: Re: [opensuse] home dirs Date: Wed, 15 May 2013 21:39:42 -0400
I was talking about ext3. I was curious why there would be a 30,000 sub-directory limit. If you want 70,000+ subdirectories in one directory, I would definitely do some performance testing. Just because an FS may support it doesn't mean it will be efficient. A new thread to discuss just that issue is something I would do in your shoes. Greg
-----Original Message-----
As said, I did some simple tests; results are attached. I just wanted to know _if_ that many subdirs were supported, and what the inode overhead would be.
Before things start living a life of their own, let me point out that the test (creating subdirs, deleting subdirs) was done on an idle HP laptop with an i7 CPU on openSUSE 12.1. Absolute numbers may not be relevant to you, but when doing the same test you should see the same relative results... (I also did other tests with thousands of LVM volumes filled and getting mounted, but that's beyond the thread ;-)
What surprised me was that while ext3 (as expected) cannot hold 60K subdirs, its predecessor, ext2, had no problems with it, just as ext4. Furthermore, xfs, reiser and btrfs go up to a million subdirs. Also, these three prove to be way faster than ext2/3/4. Finally, vfat proved to be a PITA, no surprise either ;-)
As reiser is (sort of) a thing of the past, the best results came from XFS and btrfs. For both of them there are pros and cons. As said by lots of others, there are many, many other aspects to consider.
hw
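[Editor's note: the attachment doesn't survive the archive, but the shape of such a test is easy to reproduce. A rough sketch (N is arbitrary; point the scratch dir at a mount of the filesystem under test to get meaningful numbers):

```shell
#!/bin/sh
# Create N subdirectories in a scratch directory and time it.
# Raise N toward the suspected ceiling (30K for ext3, etc.) to find
# both the hard limit (mkdir starts failing) and the slowdown curve.
N=1000
scratch=$(mktemp -d)
start=$(date +%s)
i=0
while [ "$i" -lt "$N" ]; do
    mkdir "$scratch/d$i"
    i=$((i + 1))
done
end=$(date +%s)
count=$(ls "$scratch" | wc -l)
echo "created $count subdirs in $((end - start))s"
rm -rf "$scratch"
```

Running the same loop on each candidate filesystem, as Hans did, gives comparable relative numbers even on modest hardware.]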
Hans -- ...and then Hans Witvliet said... % % Hi all, Hiya! % % I was wondering, there are several ways to separate the machine where % storage for home directories are provided from the place where it is ... % % Point that kept me awake, is the scalability: it might very well work % for ten or hundred users, but what when confronted with 1000, 10.000 or % more users? % ... % At one end of the spectrum you can have a single NFS/SMB server % exporting the whole /home. While at the other end you could export each % user's home directory individually. % ... % On purpose i leave out a number of elementary items, like the storage % itself, network itself and geographically redundancy. % % Any admin around here confronted with likewise questions? % If there are any rules-of-thumbs, i'd like to know, as it kept me awake % all night ;-) [snip] If you're going to leave out geography and redundancy, then your problem becomes both simpler (only one source) and harder (one very busy source). And, yes, when one adds a few more zeros then things can change quickly. You haven't said what your storage platform is, and that makes a difference. Lots of high-end NAS frames can handle what you want; I'm familiar with NetApp Filers, but there are others as well. There's nothing that says a SuSE machine can't also serve out that much content, but you may be reinventing the wheel to develop and manage all of those exports, and if you're seriously going to be pushing that much traffic then either a little extra cost for a high-end NAS frame won't be an issue or someone is paying you to write high-end-capable management for the appliance you're designing, right? :-) In general, as already noted by others, the automounter is your friend and you probably can actually manage to have a few hundred mounts on your client with no problem; if it can handle the load of that many users doing stuff, then the mounts themselves shouldn't be a factor, and they'll fall away after an inactivity timeout. 
So that's where I'd start. And, finally, the one thing I haven't yet seen mentioned is to dereference your mounts by a level. Put everyone's home dir under the first two chars of the login, or the location & grade, or a three-digit hash of the name, or whatever -- and then mount those upper dirs with dozens of users beneath instead of the dozens of users directly. If you analyze carefully in advance and have a sufficient collision rate in your hashing function, you'll distinctly reduce the number of mounts on each client. You can afford even a complex or time-consuming hash because you only have to do it once per account. I'd love to see an analysis and summary of your results! HTH & HAND :-D -- David T-G See http://justpickone.org/davidtg/email/ See http://justpickone.org/davidtg/tofu.txt
Hans Witvliet said the following on 05/14/2013 03:24 AM:
I was wondering: there are several ways to separate the machine where storage for home directories is provided from the place where it is needed. This is needed in case you have multiple (virtual) desktop machines where a number of people could log in.
You really ought to look at the Linux Terminal Server Project. They have been doing that for a long while and know all the permutations. Yes, you'll have people here suggest various ways (automounter, NFS, CIFS and more), but LTSP has "Been There, Done That" and rung all the changes. They have a "fire and forget" package and various customizations, and lots and lots of advice based on actual practice rather than theory. http://www.ltsp.org/
It's sort of "how thin do you want your clients to be?" I once tried the LXE route based on what comes with SUSE, and after a while I found it was just too fiddly to set everything up, so I turned to LTSP and "It Just Works". As I say, lots of people here will tell you how to do it with SUSE. They are 100% correct. You will find it a "learning experience", as I did.
Another project I worked on might be of interest. Essentially it was a Linux version of Citrix. The technology is more often called "VDI". In effect it is the 21st-century version of X terminals. Back years ago, USENIX conferences would have a terminal room with lots of X terminals where you could 'call home'. These were X display machines, the X version of "dumb terminals". In case you don't know, the X protocol is the other way round from the way MS Windows works. The display engine/terminal is just that: the program runs 'remotely' and sends the display codes over a comms line. It could be a TCP link or just a 1200-baud modem link.
You can google for VDI and find out more about the various ways this has been done. Like LTSP, there are variations in how much is remote and how much is local. One extreme is, as you hint, complete virtual machines doing everything except display.
That being said, there are hints that the way systemd is heading will support "multiple seats". Whether that means multiple video cards, keyboards and mice plugged into a single box, I don't know.
I do know that for a long time UNIX has been doing on one box, possibly chrooted, what it takes many virtual instances of MS Windows to do: dns, dhcp, smtp and more -- and all without needing virtualization. So it might be that 'multi-seat' is really a GUI version of what we were doing with PDP-11s back in the 70s and 80s, that is, having many terminals plugged into one box and many people logged in simultaneously, making use of the multi-processing and process-separation capabilities of UNIX. Now, with Linux and things like cgroups, we have better control over that separation and resource management than we ever had with the PDP-11 and the VAX :-) And that kind of management is where the facilities systemd is addressing come in, and why multi-seat looks so interesting.
-- 
I have never made but one prayer to God, a very short one: "O Lord, make my enemies ridiculous." And God granted it. --Voltaire
Anton Aylward wrote:
That being said, there are hints that the way systemd is heading will support "multiple seats". Whether that means multiple video cards, keyboards and mice plugged into a single box I don't know. I do know that for a long time UNIX has been doing on one box, possibly chrooted, what it takes many virtual instances of MS-Windows to do: dns, dhcp, smtp and more -- and all without needing virtualization.
So it might be that the 'multi-seat' is really a GUI version of what we were doing with PDP-11s back in the 70s and 80s, that is, having many terminals plugged into one box and many people logged in simultaneously and making use of the multi-processing and process separation capabilities of UNIX.
c't ran an article on such a Linux multi-seat setup some years back, possibly more than 6-7 years ago. AFAIR, without virtualization.
Now, with Linux and things like CGROUPS we have better control over that separation and resource management than we ever had with the PDP-11 and the VAX :-)
You should have moved up to real operating systems :-) MVS and TSO never had any problems with that. (It was said, however, that 400 users pressing Enter at the same time would kill a 3090-400J.) -- Per Jessen, Zürich (18.8°C) http://www.dns24.ch/ - free DNS hosting, made in Switzerland.
Per Jessen wrote:
Anton Aylward wrote:
That being said, there are hints that the way systemd is heading will support "multiple seats". Whether that means multiple video cards, keyboards and mice plugged into a single box I don't know. I do know that for a long time UNIX has been doing on one box, possibly chrooted, what it takes many virtual instances of MS-Windows to do: dns, dhcp, smtp and more -- and all without needing virtualization.
So it might be that the 'multi-seat' is really a GUI version of what we were doing with PDP-11s back in the 70s and 80s, that is, having many terminals plugged into one box and many people logged in simultaneously and making use of the multi-processing and process separation capabilities of UNIX.
c't ran an article on such a Linux multi-seat setup some years back, possibly more than 6-7 years ago. AFAIR, without virtualization.
Way back in 2006: c't 10/2006, page 228. There was also a more recent one: c't 5/2013, page 158. Only 2.5 pages for setting up a 2-seat system; seems straightforward (although with Ubuntu). -- Per Jessen, Zürich (18.8°C) http://www.dns24.ch/ - free DNS hosting, made in Switzerland.
c't ran an article on such a Linux multi-seat setup some years back, possibly more than 6-7 years ago. AFAIR, without virtualization.
Way back in 2006: c't 10/2006, page 228. There was also a more recent one: c't 5/2013, page 158. Only 2.5 pages for setting up a 2-seat system; seems straightforward (although with Ubuntu).

Not as easy as you think. You have to have a display manager which can allocate the various mice, keyboards and screens to each session. Only very recent releases of display managers have this capability. The contents of this email are confidential and for the exclusive use of the intended recipient. If you receive this email in error you should not copy it, retransmit it, use it or disclose its contents but should return it to the sender immediately and delete your copy.
Hearns, John wrote:
c't ran an article on such a Linux multi-seat setup some years back, possibly more than 6-7 years ago. AFAIR, without virtualization.
Way back in 2006: c't 10/2006, page 228
There was also a more recent one: c't 5/2013, page 158. Only 2.5 pages for setting up a 2-seat system, seems straight forward (although with Ubuntu).
Not as easy as you think. You have to have a display manager which can allocate the various mice, keyboards and screens to each session. Only very recent releases of display managers have this capability.
Back then I played a little with the setup from the first article above; it involved a kernel patch and an X-server patch, but I didn't have much real reason to work with it. Multi-seat has always seemed the obvious choice for call-centers or a support-desk - four desks with a quad-core box in the middle. -- Per Jessen, Zürich (19.6°C) http://www.dns24.ch/ - free DNS hosting, made in Switzerland.
So it might be that the 'multi-seat' is really a GUI version of what we were doing with PDP-11s back in the 70s and 80s, that is, having many terminals plugged into one box and many people logged in simultaneously and making use of the multi-processing and process separation capabilities of UNIX. Now, with Linux and things like CGROUPS we have better control over that separation and resource management than we ever had with the PDP-11 and the VAX :-) And that kind of resource management, which systemd is addressing, is why multi-seat looks so interesting.

I agree - I have looked at Multiseat X quite a lot. Last time I looked it wasn't quite there - I may be wrong. As you say, there is a lot of potential with multiseat - using cgroups to share the resources of a big machine with powerful graphics among a set of remote users.
Hearns, John wrote:
So it might be that the 'multi-seat' is really a GUI version of what we were doing with PDP-11s back in the 70s and 80s, that is, having many terminals plugged into one box and many people logged in simultaneously and making use of the multi-processing and process separation capabilities of UNIX.
Back in the late '70s, early '80s, I worked on a system running on a Data General Nova 800 with a couple of dozen users. It sure wasn't Unix or even RDOS that they were running. It was a custom system that ran only one application. It was used for sending telegrams. In addition to the operators sitting in front of terminals, there were also several Telex lines connected for incoming traffic from that network. The system could also support up to two remote sites, with up to 4 terminals each. This computer then sent the telegrams, via several 75 b/s Baudot serial lines, to the main computer.
James Knott wrote:
Hearns, John wrote:
So it might be that the 'multi-seat' is really a GUI version of what we were doing with PDP-11s back in the 70s and 80s, that is, having many terminals plugged into one box and many people logged in simultaneously and making use of the multi-processing and process separation capabilities of UNIX.
Back in the late '70s, early '80s, I worked on a system running on a Data General Nova 800 with a couple of dozen users. It sure wasn't Unix or even RDOS that they were running. It was a custom system that ran only one application. It was used for sending telegrams. In addition to the operators sitting in front of terminals, there were also several Telex lines connected for incoming traffic from that network. The system could also support up to two remote sites, with up to 4 terminals each. This computer then sent the telegrams, via several 75 b/s Baudot serial lines, to the main computer.
Yes, but the multi-user part of it is not the issue, nor even multiple X-servers. It's multiple physical screens, keyboards & mice that are the 'novel' part of the equation.
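For what it's worth, this is exactly what systemd-logind's seat management is meant to handle: you tag a graphics card plus a keyboard and mouse with a seat name, and the display manager starts a separate session on each seat. A sketch - the sysfs paths below are placeholders, since the real ones depend entirely on your hardware:

```
# Show the seats logind knows about and their attached hardware
loginctl list-seats
loginctl seat-status seat0

# Attach a second graphics card and a USB input hub to a new seat;
# logind remembers these assignments across reboots.
# Both /sys paths here are placeholders, not real devices.
loginctl attach seat1 /sys/devices/pci0000:00/0000:00:02.0/drm/card1
loginctl attach seat1 /sys/devices/pci0000:00/0000:00:1d.0/usb2/2-1
```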
Dave Howorth said the following on 05/14/2013 12:07 PM:
James Knott wrote:
Hearns, John wrote:
So it might be that the 'multi-seat' is really a GUI version of what we were doing with PDP-11s back in the 70s and 80s, that is, having many terminals plugged into one box and many people logged in simultaneously and making use of the multi-processing and process separation capabilities of UNIX.
Back in the late '70s, early '80s, I worked on a system running on a Data General Nova 800 with a couple of dozen users. It sure wasn't Unix or even RDOS that they were running. It was a custom system that ran only one application. It was used for sending telegrams. In addition to the operators sitting in front of terminals, there were also several Telex lines connected for incoming traffic from that network. The system could also support up to two remote sites, with up to 4 terminals each. This computer then sent the telegrams, via several 75 b/s Baudot serial lines, to the main computer.
Yes, but the multi-user part of it is not the issue, nor even multiple X-servers. It's multiple physical screens, keyboards & mice that are the 'novel' part of the equation.
Maybe; maybe not. If they are remote terminals, X-terminals (see http://en.wikipedia.org/wiki/X_terminal and as it says "Not to be confused with xterm or other terminal emulators running under X". See also http://searchnetworking.techtarget.com/definition/X-terminal). In the limiting case there are implementations of the X Display Server that run on PCs. You are, obviously, familiar with the ones running under Linux :-) (Someone tell us briefly about XDMCP.) But there are also versions for MS-Windows. Somewhere I still have a copy of the old Hummingbird Connectivity Suite which included the Exceed X Display Server for the PC. http://www.softpanorama.org/Unixification/Hummingbird/index.shtml

The point here is that UNIX has always been - well, certainly since the mid-70s - a multi-user OS. Heck, the old PDP-11/45 with just a 10M disk was supporting 20-40 people at one development/documentation site I worked at back in 81/82, before we got a VAX. Yes, that was with VT-100 grade terminals (OK, Wyse-60s), but offloading the GUI-ness to PCs (see above) ... I seem to recall Intel advertising the early 8086 chip as being as powerful as the PDP-11.

All of which gets back to the LTSP model. Or ... http://www.tldp.org/HOWTO/XDMCP-HOWTO/ -- He who stops being better stops being good. - Oliver Cromwell
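Since the question was asked: XDMCP in brief. The display manager on the session host listens on UDP port 177; a bare X server anywhere on the network queries it and gets the graphical login screen - exactly the X-terminal model described above. A sketch of the classic xdm setup (the file paths are the common defaults, but check your distribution's):

```
# On the session host, /etc/X11/xdm/xdm-config normally ships with
#     DisplayManager.requestPort: 0
# which disables XDMCP. Comment that line out, list the allowed
# client hosts in /etc/X11/xdm/Xaccess, and restart xdm.

# On the client (the would-be X-terminal), start an X server that
# queries the remote display manager instead of a local one:
X :1 -query session-host.example.com
```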
Anton Aylward said the following on 05/14/2013 12:40 PM:
Dave Howorth said the following on 05/14/2013 12:07 PM:
James Knott wrote:
Hearns, John wrote:
So it might be that the 'multi-seat' is really a GUI version of what we were doing with PDP-11s back in the 70s and 80s, that is, having many terminals plugged into one box and many people logged in simultaneously and making use of the multi-processing and process separation capabilities of UNIX.
Back in the late '70s, early '80s, I worked on a system running on a Data General Nova 800 with a couple of dozen users. It sure wasn't Unix or even RDOS that they were running. It was a custom system that ran only one application. It was used for sending telegrams. In addition to the operators sitting in front of terminals, there were also several Telex lines connected for incoming traffic from that network. The system could also support up to two remote sites, with up to 4 terminals each. This computer then sent the telegrams, via several 75 b/s Baudot serial lines, to the main computer.
Yes, but the multi-user part of it is not the issue, nor even multiple X-servers. It's multiple physical screens, keyboards & mice that are the 'novel' part of the equation.
Maybe; maybe not. If they are remote terminals, X-terminals
Sorry, perhaps I wasn't clear. If Dave's concern is plugging more boards into PC chassis, then I think that is a wrong-headed approach to "multi-seat". Even small towers don't have that many spare slots :-) And as for rack-mounted servers and 'blades' - there's little hope!

But displays are ubiquitous. Even apart from phones and tablets (if you can run X through a web browser, as opposed to running an application through a web browser), we have landfill overflowing with old PCs and screens, all of which are quite capable of running the not very demanding software needed for an X display. After all, X is not Windows. The reality is you can make a graphics card that does most of what X requires, since all X is, is the display. The computation is done on the computer, not on the display.

Heck, the junk out of my Closet of Anxieties or a $20 "special" from the Salvation Army Thrift Store -- hey, wow, they now have flat screens for the gouging price of $40[1]. I see USB mice for $2.99 and keyboards for $4.99. Sorry, what was that? "New Equipment" and "warranty"? Look, you want a warranty on your toilet paper too? At those prices you can fill your closet with spares a number of times over for the price of one new warrantied setup. This isn't rocket science. The smarts are on the compute server.

[1] Come on, I can get a 19" new at local outlets for under $100!

-- "It seemed to me," said Wonko the Sane, "that any civilization that had so far lost its head as to need to include a set of detailed instructions for use in a package of toothpicks, was no longer a civilization in which I could live and stay sane." -- Douglas Adams' _So Long, And Thanks For All The Fish_
On 5/14/2013 10:19 AM, Anton Aylward wrote:
After all, X is not Windows. The reality is you can make a graphics card that does most of what X requires, since all X is, is the display. The computation is done on the computer, not on the display.
True, but X (even the lightweight versions) is a bandwidth hog. I'd be leery of doing that on any scale over a wide area network (I've done it, and it sucks). It's OK on a campus network, but it's sucking way more bandwidth than having your own Linux installation on a local machine with files hosted on a network file server. Same problem with Windows Terminal Server. Doesn't scale well over long distances. -- _____________________________________ ---This space for rent---
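One common mitigation for X's chattiness, sketched with an illustrative host name - compression helps on slow links, though the protocol's synchronous round-trips still hurt when latency is high:

```
# Tunnel X through ssh with compression enabled; -X forwards the
# display, -C compresses the stream
ssh -X -C user@far-away-host xterm
```

Protocols that ship compressed screen updates instead of drawing requests (VNC, NX, RDP) generally degrade more gracefully over a WAN.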
John Andersen said the following on 05/14/2013 01:50 PM:
On 5/14/2013 10:19 AM, Anton Aylward wrote:
After all, X is not Windows. The reality is you can make a graphics card that does most of what X requires, since all X is, is the display. The computation is done on the computer, not on the display.
True, but X (even the lightweight versions) is a bandwidth hog.
I'd be leery of doing that on any scale over a wide area network (I've done it, and it sucks). It's OK on a campus network, but it's sucking way more bandwidth than having your own Linux installation on a local machine with files hosted on a network file server.
Same problem with Windows Terminal server. Doesn't scale well over long distances.
First: What do you mean by 'scale'? The LTSP discusses this. Second: plugging more cards into a chassis doesn't scale well over distance either, unless you have a dedicated rack, in which case we're back to the CITRIX situation, and even then there are still regular PCs at the client end.

Contemporary with the X-terminals of the 80s/90s was stuff like LANtastic and Novell LANs. In many ways they were pretty amazing. I saw one LANtastic setup that was using something that seemed barely ahead of string and tin cans. Strangely, they used pretty thin clients, MS-DOS and early MS-Windows, but it seemed that instead of just remote home folders they downloaded the binaries over the LAN as well. Perhaps my memory has slipped. I do recall reading that as LAN protocols went they were very efficient, but as Mike Padlipsky pointed out in 'The Elements of Networking Style', they are not scalable - they are not routable. TCP/IP carries a bit of an overhead by comparison, but it is routable - which means it can work over long distances.

The scale-by-volume is a separate problem, and there have been efforts to address that for X. Many end up as VNC, which gets back to the CITRIX model: you have a display mechanism on the server and remote emulation of the mouse and keyboard. It may not be a real display in that there is a graphics card on the server dedicated to the channel; we know about framebuffers :-) BUT it does get back to the "how thin do you want it to be" remote client that is the display in front of the end user, the one he or she is looking at and laying hands on the keyboard and mouse.

Which gets back to my point about landfill and Salvation Army Specials. Yes, you can load it up with a nice fast graphics card, but if that is faster than your comms channel it's a waste.

-- "Far out in the uncharted backwaters of the unfashionable end of the western spiral arm of the Galaxy lies a small, unregarded yellow sun.
Orbiting this at a distance of roughly ninety-eight million miles is an utterly insignificant little blue-green planet whose ape-descended life forms are so amazingly primitive that they still think digital watches are a pretty neat idea."
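The VNC variant described above - everything rendered on the server, only the display exported - looks roughly like this in practice. The command names are TigerVNC's and x11vnc's, and the host name is illustrative; other VNC implementations differ in detail:

```
# On the compute server: a virtual desktop with no physical
# graphics hardware behind it
vncserver :2 -geometry 1280x1024

# Or export an existing physical display's framebuffer instead:
x11vnc -display :0

# On the thin client, all that runs is the viewer:
vncviewer compute-server.example.com:2
```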
On 5/14/2013 1:50 PM, John Andersen wrote:
On 5/14/2013 10:19 AM, Anton Aylward wrote:
After all, X is not Windows. The reality is you can make a graphics card that does most of what X requires, since all X is, is the display. The computation is done on the computer, not on the display. True, but X (even the lightweight versions) is a bandwidth hog.
I'd be leery of doing that on any scale over a wide area network (I've done it, and it sucks). It's OK on a campus network, but it's sucking way more bandwidth than having your own Linux installation on a local machine with files hosted on a network file server.
Same problem with Windows Terminal server. Doesn't scale well over long distances.
Hmm. In the early-mid 90's a colleague was logging into our network from home over a 14.4k line, running X on his home PC. He said it was not full speed (we were using HP-PA machines), but it was quite comfortable. Of course, X wasn't complete at that time. Maybe it took up more bandwidth as it grew? John Perry
Anton Aylward wrote:
Anton Aylward said the following on 05/14/2013 12:40 PM:
Dave Howorth said the following on 05/14/2013 12:07 PM: [big random snip]
Maybe; maybe not. If they are remote terminals, X-terminals
Sorry, perhaps I wasn't clear.
If Dave's concern is plugging more boards into PC chassis, then I think that is a wrong-headed approach to "multi-seat".
That is usually what is meant by "multi-seat" though. For four seats, he wouldn't need more than two graphics cards. There are even cards with up to 6 separate outputs. (not sure they would work for multi-seating though).
Even small towers don't have that many spare slots :-) And ask for rack mounted servers and 'blades' - there's little hope!
You wouldn't want to sit near a blade center either. :-)
But displays are ubiquitous. Even apart from phones and tablets (if you can run X through a web browser, as opposed to running an application through a web browser), we have landfill overflowing with old PCs and screens, all of which are quite capable of running the not very demanding software needed for an X display. After all, X is not Windows. The reality is you can make a graphics card that does most of what X requires, since all X is, is the display. The computation is done on the computer, not on the display.
It doesn't satisfy the key objectives for multi-seating - reduced space and power consumption. Thin terminals do for many jobs, unless you occasionally want to run something more demanding. Also, are four thin terminals more expensive than one box with four graphics cards/outputs? -- Per Jessen, Zürich (17.3°C) http://www.dns24.ch/ - free DNS hosting, made in Switzerland.
Anton Aylward wrote:
In the limiting case there are implementations of the X Display Server that run on PCs. You are, obviously, familiar with the ones running under Linux :-) (Someone tell us briefly about XDMCP) But there are also versions for MS-Windows. Somewhere I still have a copy of the old Hummingbird Connectivity Suite which included the Exceed X Display Server for the PC. http://www.softpanorama.org/Unixification/Hummingbird/index.shtml
I often use XDMCP to connect to my other computers. I have also used Xming on Windows.
The point here is that UNIX has always been - well, certainly since the mid-70s - a multi-user OS. Heck, the old PDP-11/45 with just a 10M disk was supporting 20-40 people at one development/documentation site I worked at back in 81/82, before we got a VAX. Yes, that was with VT-100 grade terminals (OK, Wyse-60s), but offloading the GUI-ness to PCs (see above) ... I seem to recall Intel advertising the early 8086 chip as being as powerful as the PDP-11.
I also recall claims the 386 was as powerful as the VAX 11/780 CPU. It was around that time I decided my days as a computer tech, maintaining mini-computers, were numbered. Back then, almost all the computer systems I worked on were for message switching, what we now call "e-mail".