Hi. I want to mount our server's /usr directory on the clients so that they can use scribus and inkscape. I have tried this with our AMD server with 1024 MB RAM, but it's too slow to be able to do anything with 2, let alone 20, clients. The users' /home directories are mounted from the same box. Could someone give me the specification for a server able to do what I want, so that it would run at the same speed as from the clients' PII 450s? Thanks, Steve.
I would upgrade the server to something with a lot more memory and then launch the apps with X11 forwarding; that way no NFS server is needed. To launch, you just do "ssh <server>" and then launch the app with "<appname> &". If there is a display problem you might need to include "-X" with your ssh, i.e. "ssh -X <server>" (a minimal sketch follows below the quoted message). Brad Dameron Systems Administrator SeaTab Software www.seatab.com On Tue, 2004-12-28 at 13:59, steve wrote:
Hi.
I want to mount our server's /usr directory on the clients so that they can use scribus and inkscape. I have tried this with our AMD server with 1024 MB RAM, but it's too slow to be able to do anything with 2, let alone 20, clients. The users' /home directories are mounted from the same box.
Could someone give me the specification for a server able to do what I want, so that it would run at the same speed as from the clients' PII 450s?
Thanks, Steve.
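A minimal sketch of the two steps Brad describes, assuming the server is reachable as "appserver" (a placeholder name) and X11 forwarding is enabled in its sshd:

    # From a client, open a forwarded session on the server
    ssh -X appserver
    # In that session, start the app in the background; it displays on the client
    scribus &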
On Tuesday 28 December 2004 23:26, Brad Dameron wrote:
I would upgrade the server to something with a lot more memory and then launch the apps with X11 forwarding; that way no NFS server is needed. To launch, you just do "ssh <server>" and then launch the app with "<appname> &". If there is a display problem you might need to include "-X" with your ssh, i.e. "ssh -X <server>".
Thanks. I'd never thought of that. Is it quicker than using NFS? I'm going to try it anyway. But our students expect to click an icon on the KDE desktop to launch an application. Is that possible within your scheme? I can ssh -X to the server no problem, but I have to do it in a terminal window. Cheers. Steve
You can set up an SSH public key (http://www.noah.org/ssh/publickey/) so that no password is needed. Then just set up an icon as follows:

    ssh -X <username>@<host> <program>

Example:

    ssh -X testuser@192.168.1.20 yast2

This will then connect and launch the yast2 app remotely (a sketch of the key setup follows below the quoted message). I use this a lot to update my servers remotely. You can use this to execute scripts remotely as well. As long as the app draws an X11 display, it will be forwarded to the local screen. And yes, it can be quicker, since all the work is offloaded to the server; the client is just displaying it. However, note that anything done in the app is thus stored on the server. This can be nice in that you can set it up so that a user can go to any machine, execute the app, and still have their data saved. If this is the case, you would want to set up multiple users on the server so that each client connects and executes the app as that user. Brad Dameron Systems Administrator SeaTab Software www.seatab.com On Tue, 2004-12-28 at 23:19, steve wrote:
On Tuesday 28 December 2004 23:26, Brad Dameron wrote:
I would upgrade the server to something with a lot more memory and then launch the apps with X11 forwarding; that way no NFS server is needed. To launch, you just do "ssh <server>" and then launch the app with "<appname> &". If there is a display problem you might need to include "-X" with your ssh, i.e. "ssh -X <server>".
Thanks. I'd never thought of that. Is it quicker than using NFS? I'm going to try it anyway. But our students expect to click an icon on the KDE desktop to launch an application. Is that possible within your scheme? I can ssh -X to the server no problem, but I have to do it in a terminal window.
Cheers. Steve
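A sketch of the password-less key setup Brad points at, using the placeholder user and host from his example; if your distribution doesn't ship ssh-copy-id, append the public key to ~/.ssh/authorized_keys on the server by hand:

    # On the client: generate a key pair (an empty passphrase gives password-less logins)
    ssh-keygen -t rsa
    # Install the public key on the server
    ssh-copy-id testuser@192.168.1.20
    # A desktop icon can now run, for example:
    ssh -X testuser@192.168.1.20 yast2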
On Tuesday 28 Dec 2004 22:26, Brad Dameron wrote:
I would upgrade the server to something with a lot more memory and then launch the apps with X11 forwarding; that way no NFS server is needed. To launch, you just do "ssh <server>" and then launch the app with "<appname> &". If there is a display problem you might need to include "-X" with your ssh, i.e. "ssh -X <server>".
Having read this thread and the OP's original thread, I think there may be confusion over *where* the apps need to execute. Do you want to execute the apps REMOTELY on the server and display them on the clients, or have the clients load the apps from the server for LOCAL execution? Dylan
-- "I see your Schwartz is as big as mine" -Dark Helmet
I think there may be confusion over *where* the apps need to execute.
I don't care where it runs. I just want it to run at an acceptable speed and without me having to install it on each and every client's local disk. Both the NFS and SSH methods work but are too slow. I'd like to be able to say to my director that I need x GB of RAM and a y GB SCSI disk. Or maybe no one has been here before and I have to keep buying 1024 MB RAM chips until it works... :-( Does anyone have any concrete information? Or maybe one simply doesn't do things this way. Cheers and thanks for your patience and help. Steve.
* steve <mail@steve-ss.com> [12-29-04 18:27]:
I think there may be confusion over *where* the apps need to execute.
I don't care where it runs. I just want it to run at an acceptable speed and without me having to install it on each and every client's local disk.
then you *do* care where it runs .... ! -- Patrick Shanahan Registered Linux User #207535 http://wahoo.no-ip.org @ http://counter.li.org HOG # US1244711 Photo Album: http://wahoo.no-ip.org/photos
On Wednesday 29 Dec 2004 23:26, steve wrote:
I think there may be confusion over *where* the apps need to execute.
I don't care where it runs. I just want it to run at an acceptable speed and without me having to install it on each and every client's local disk.
May we know why you don't want it on every local disk?
Both the NFS and SSH methods work but are too slow.
Well, they will be - especially if the users' home directories are served by the same machine. You can either have 40+ NFS mounts (2 per login: /home/<user> and /usr/...) or 20+ NFS mounts and 20+ instances of each app. And remember that if you mount /usr, then lots of other things (including a fair amount of background stuff) will be loaded across the network too. Also, using the ssh method, if user1 is logged in on a client and then does ssh -X user1@server, he is logged into the server too, which might lead to conflicts should both logins try to access the same file in the home directory.
I'd like to be able to say to my director that I need x GB of RAM and a y GB SCSI disk. Or maybe no one has been here before and I have to keep buying 1024 MB RAM chips until it works... :-(
Does anyone have any concrete information? Or maybe one simply doesn't do things this way.
Truthfully, unless you have a compelling reason to do it this way, you should really install on each client. Your students can't screw with the system (too much) after all, assuming you have everything set up correctly. Dylan
Cheers and thanks for your patience and help. Steve.
-- "I see your Schwartz is as big as mine" -Dark Helmet
May we know why you don't want it on every local disk?
Just being lazy. It's such a pain having to install a new app on every one of our 20 clients. I've never found a satisfactory script to help, so I walk around to each client and do it by hand. Or, if the LAN's not busy, I ssh to the clients and install without the exercise.
steve <mail@steve-ss.com> writes:
Just being lazy. It's such a pain having to install a new app on every one of our 20 clients. I've never found a satisfactory script to help, so I walk around to each client and do it by hand.
I use simple scripts like:

    #!/bin/bash
    # run_on_nodes: run the given command on each client node via ssh
    set -x
    NODES="pi1 pi2 pi3 pi4 pi5 pi6 pi7"
    for node in $NODES; do
        ssh $node "$@"
    done

It runs the argument on each client:

    # ./run_on_nodes "rpm -qa | head -1"
    + NODES=pi1 pi2 pi3 pi4 pi5 pi6 pi7
    + ssh pi1 'rpm -qa | head -1'
    kdeartwork3-sound-3.2.1-53
    + ssh pi2 'rpm -qa | head -1'
    kdeartwork3-sound-3.2.1-53
    ...

A better solution is ZENworks Linux Management (http://www.novell.com/products/zenworks/linuxmanagement/quicklook.html, formerly Red Carpet). I don't use it since the full price is too high (for me). There may be cheaper solutions, but I haven't had enough time to investigate them. Back to the server specification: it's up to you to do some measurements to estimate what server you need, or you can ask a company to do it for you. We can mention some tricks (e.g. exporting NFS filesystems with the async and read-only options), but it's hard to give specifications without knowing the machines you have, their load, the network load, ... -- A.M.
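A hedged sketch of the export options Alexandr mentions; the paths and client subnet here are assumptions, so adjust them for your site:

    # /etc/exports on the server (run "exportfs -r" after editing)
    # /usr can be read-only and async, since clients only read binaries from it
    /usr    192.168.1.0/255.255.255.0(ro,async)
    # /home stays read-write; sync is safer for user data
    /home   192.168.1.0/255.255.255.0(rw,sync)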
I can recommend cfengine if you are doing /any/ sort of routine (and esp. repetitive) maintenance on more than a couple of machines. -- Carpe diem - Seize the day. Carp in denim - There's a fish in my pants! Jon Nelson <jnelson-suse@jamponi.net>
On Thursday 30 December 2004 11:51, Alexandr Malusek wrote:
steve <mail@steve-ss.com> writes:
Just being lazy. It's such a pain having to install a new app on every one of our 20 clients. I've never found a satisfactory script to help, so I walk around to each client and do it by hand.
I use simple scripts like
    #!/bin/bash
    set -x
    NODES="pi1 pi2 pi3 pi4 pi5 pi6 pi7"
    for node in $NODES; do
        ssh $node "$@"
    done
It runs the argument on each client:
    # ./run_on_nodes "rpm -qa | head -1"
    + NODES=pi1 pi2 pi3 pi4 pi5 pi6 pi7
    + ssh pi1 'rpm -qa | head -1'
    kdeartwork3-sound-3.2.1-53
    + ssh pi2 'rpm -qa | head -1'
    kdeartwork3-sound-3.2.1-53
    ...
That is enough to steer me in the right direction. Thanks. It's just that, coming away from NT or 2000, you really have no idea how people do things any more; that's a big disadvantage of leaving the safety of Microsoft.
Back to the server specification: it's up to you to do some measurements to estimate what server you need, or you can ask a company to do it for you. We can mention some tricks (e.g. exporting NFS filesystems with the async and read-only options), but it's hard to give specifications without knowing the machines you have, their load, the network load, ...
You are correct of course. I should give more details. We have an AMD 2200 server with 1024 MB. It is an NFS server exporting /home read-write to 23 PII 450 clients using NIS. SuSE 9.2 is installed locally on all disks. All clients use kdm to log into KDE. It works fine so far. I've done all the async/autofs/8192 tricks I can lay my hands on. Here goes with a typical bottleneck: I have a lesson using OOo Draw and we are all going to start OOo at the same time, reload the flow diagram we started last lesson, save it every 5 or so minutes, and maybe cut a diagram from a website in konq and paste it in. Exporting /usr and /home, or using ssh to run the app on the server, is, to put it politely, a joke. Unless you have only 2 clients of course, where it runs perfectly. But we don't :-( Cheers, Steve.
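For reference, a sketch of the client-side mount behind the "8192 tricks" Steve mentions; the server name is a placeholder and the options are an assumed example, not Steve's actual config:

    # /etc/fstab on a client
    server:/home   /home   nfs   rw,hard,intr,rsize=8192,wsize=8192   0 0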
steve <mail@steve-ss.com> writes:
We have an AMD 2200 server with 1024 MB. It is an NFS server exporting /home read-write to 23 PII 450 clients using NIS. SuSE 9.2 is installed locally on all disks. All clients use kdm to log into KDE.
The bottleneck is usually the NFS server's disk, which must handle many random-access requests. In your situation, I would probably create a RAM disk on the server and use it during lectures. Students could have their personal home directories on the server's hard disk, but special accounts used during lectures would have home directories on the RAM disk. The content of the RAM disk could be initialized before each lecture. At the end of the lecture, students could copy new files into their home directories on the hard disk. In this case, IMHO, even a slow IDE HD on the server would handle the load quite OK; you just need a sufficient amount of RAM on the server. Companies usually use fast disk arrays with large battery-powered caches. Alternatively, a different file system (for instance GPFS) may also distribute the load. But the price of these solutions may be high. Well, sometimes very high. As far as the software is concerned: if the packages are in RPM, then I would install them on each client. -- A.M.
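A minimal sketch of the RAM-disk idea using tmpfs; the mount point, size, and skeleton path are assumptions:

    # Create a RAM-backed filesystem for the lecture accounts' home directories
    # (the size must fit comfortably in the server's RAM plus swap)
    mount -t tmpfs -o size=512m tmpfs /home/lecture
    # Seed it from a prepared skeleton before the lesson starts
    cp -a /srv/lecture-skel/. /home/lecture/
    # The contents vanish at umount or reboot, so students copy anything they
    # want to keep back to their real home directories afterwards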
Hi, It's been a long while since I last touched NFS, and I have some general questions. What's the difference between nfs-utils and nfsserver? I think there was some conflict between these when I did an initial install. Are there two different NFS implementations? What NFS version is supported by open source? Do they support Kerberos authentication? -- joaquin
On Wed, 2004-12-29 at 18:26, steve wrote:
I think there may be confusion over *where* the apps need to execute.
I don't care where it runs. I just want it to run at an acceptable speed and without me having to install it on each and every client's local disk.
Both the NFS and SSH methods work but are too slow. I'd like to be able to say to my director that I need x GB of RAM and a y GB SCSI disk. Or maybe no one has been here before and I have to keep buying 1024 MB RAM chips until it works... :-(
Does anyone have any concrete information? Or maybe one simply doesn't do things this way.
Cheers and thanks for your patience and help. Steve.
LTSP has some docs on their page about server requirements. These should be similar to the SSH requirements. I have run 12 clients on a similar server (1800 with 1024) using LTSP. The users did NOT generally all load and save at the same time ... things were random. Under these circumstances, performance was surprisingly usable. Drive speed would be crucial with either SSH or NFS. The server above used two high-quality SCSI drives in a mirror set (software RAID). Read access on RAID is faster, as data can be accessed in parallel. I have never tried this ... but what would happen if you exported the /usr directory from the server and mounted it someplace else on the clients? If you added the new paths last on the clients, wouldn't they use the server only when the required libs and binary files were not local? Just a thought, and it may hose things up due to lib conflicts. As either SSH or NFS is going to bite into the network, you may want to consider two NICs in the server and good switches, if you're not there already. Two subnets with 10 clients each. NICs and small switches are cheap these days. It's hard to see where the bottleneck is without monitoring network and server load. Admittedly the last two items were just "thinking out loud". I could not resist. This thread is about real stuff and the most fun one available to respond to ;-) Oh ... and Steve's idea of scripting access to the clients sounds like a winner too. I manually ssh to each client now and it is a bit of a pain. For updates, I export /var/lib/YaST2/you/mnt/ from the server (rw,no_root_squash,sync); see the sketch below. At least this way updates are only downloaded once. Please let us know whatever you decide and how it works out. This is why I subscribed here. Louis Richards
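The export line Louis describes, written out as a sketch; the client subnet is an assumption:

    # /etc/exports -- share the YaST Online Update download area so that
    # patches are fetched from the internet only once
    /var/lib/YaST2/you/mnt   192.168.1.0/255.255.255.0(rw,no_root_squash,sync)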
This thread is about real stuff and the most fun one available
to respond to ;-)
Recognition at last! I really think that we all get a little armchair-bound at times. Going and doing it rather than RTFM really does work when there seems to be no FM to read.
Oh ... and Steve's idea of scripting access to the clients sounds like a winner too.
Alexandr's beautifully simple script, not mine. How much time it's going to save me!
Please let us know what ever you decide and how it works out. This is why I subscribed here.
Louis Richards
Thanks Louis. It's exactly this list that has given me the confidence not to give in to commercial alternatives. I think, however, that many here are blessed with much nicer hardware to play with and may tend to forget the problems of the less fortunate. Cheers, beers all round, and a great 2005 to all who have put up with me yet again. Steve.
On Thursday 30 December 2004 00:26, steve wrote:
Does anyone have any concrete information? Or maybe one simply doesn't do things this way.
Using NFS exports is the preferred way, in my opinion. But you'll always see less performance on a "network" oriented file system than what you'll see on your local filesystem. If you take a simple 100 Mbit network, that means you'll get a 10 Mbyte/s data rate. That's going to be about 10 times slower than an ATA 133 channel (that includes SATA 150). Or, using hdparm -t on my hard drive:

    /dev/sda:
     Timing buffered disk reads:  170 MB in  3.01 seconds = 56.43 MB/sec

I've got a different setup elsewhere, where I use NFS exports of all user drives and can log into any system on the network with the user drives intact. I see no difference when working with the computer. This is on a 100 Mbit/s backend. So I suggest that the problem does not lie in the NFS/SSH stuff. I haven't followed the discussion too deeply, but what "authentication" scheme are you using? What do the log files say? Do you have any errors occurring, etc? Perhaps a slow LDAP user authentication scheme? Do you experience any other networking problems? For instance, if you try downloading something remotely, what speeds do you achieve?
Cheers and thanks for your patience and help. Steve.
My 2¢ worth, Örn
On Saturday 01 January 2005 21:05, Örn Einar Hansen wrote:
On Thursday 30 December 2004 00:26, steve wrote:
... see on your local filesystem. If you take a simple 100 Mbit network, that means you'll get a 10 Mbyte/s data rate. That's going to be about 10 times slower than an ATA 133 channel (that includes SATA 150). Or, using hdparm -t on my hard drive:
Actually you will see less than 10 Mbyte/s due to protocol overhead. A 10 Mbit/s ethernet maxes out at about 785 kbyte/s; on that basis you should see only 7 to 8 Mbyte/s on a 100 Mbit network, and that will be for a sustained transfer of a large file using sftp. I tried this and got a reported 6.8 Mbyte/s. Paul -- Paul Hewlett (Linux #359543) Email:`echo az.oc.evitcaten@ttelweh | rev` Tel: +27 21 852 8812 Cel : +27 72 719 2725 FAX: +27 866720563 --
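A quick way to reproduce this kind of measurement across an NFS mount; the paths are placeholders:

    # On the server: create a 100 MB test file in the exported directory
    dd if=/dev/zero of=/home/testfile bs=1M count=100
    # On a client: read it across the mount and time it (the first read comes
    # over the wire; later reads may be served from the client's cache)
    time dd if=/home/testfile of=/dev/null bs=1M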
Hi, Seeing these results, I wonder what one can expect from a gigabit network. Is it worth the cost? I seem to remember that the bandwidth drops quickly with cable length (cat-6)... Hans. On Sunday 02 January 2005 12:06, Paul Hewlett wrote:
On Saturday 01 January 2005 21:05, Örn Einar Hansen wrote:
On Thursday 30 December 2004 00:26, steve wrote:
... see on your local filesystem. If you take a simple 100 Mbit network, that means you'll get a 10 Mbyte/s data rate. That's going to be about 10 times slower than an ATA 133 channel (that includes SATA 150). Or, using hdparm -t on my hard drive:
Actually you will see less than 10 Mbyte/s due to protocol overhead. A 10 Mbit/s ethernet maxes out at about 785 kbyte/s; on that basis you should see only 7 to 8 Mbyte/s on a 100 Mbit network, and that will be for a sustained transfer of a large file using sftp. I tried this and got a reported 6.8 Mbyte/s.
Paul
Hans Witvliet wrote:
Hi,
Seeing these results, I wonder what one can expect from a gigabit network. Is it worth the cost? I seem to remember that the bandwidth drops quickly with cable length (cat-6)...
Bandwidth does not decrease with cable length. Attenuation and noise do increase with length, which will eventually cause an unusable signal. Gigabit ethernet is designed to provide 1 Gb/s over a 100 m run. Go much beyond that, and the error rate will rise with distance.
On Sunday 02 January 2005 17:28, James Knott wrote:
Hans Witvliet wrote:
Hi,
Seeing these results, I wonder what one can expect from a gigabit network. Is it worth the cost? I seem to remember that the bandwidth drops quickly with cable length (cat-6)...
Bandwidth does not decrease with cable length. Attenuation and noise do increase with length, which will eventually cause an unusable signal. Gigabit ethernet is designed to provide 1 Gb/s over a 100 m run. Go much beyond that, and the error rate will rise with distance.
All our 23 boxes are in the same room. It's not the hardware that's at fault; it's good at doing what it was designed for, like running a single copy of XP. Not serving 20 copies of OOo over NFS on SuSE. That's not what it was designed for. You can't do that with a box you buy down at the local supermarket for 600 euros. There are no solutions. The thread has been going for over 2 weeks now. Let's drop it. Either we cough up the cash and get a proper server with a zillion GB of RAM, or we suffer the overheads. Cheers, Steve.
but what "authentication" scheme are you using?

NIS
What do the log files say?
    rpc.mountd: authenticated mount request from sbs2.local:941 for /home (/home)
    rpc.mountd: authenticated mount request from sbs14.local:771 for /usr (/usr)

And hundreds of similar messages.
Do you have any errors occurring, etc?
No.

Do you experience any other networking problems? For instance, if you try downloading something remotely, what speeds do you achieve?
The latest OOo (I think it's about 70 MB) downloads in 5 minutes from RedIRIS in Madrid. ADSL is one of the few things in Spain that does what it says it does. I've come to the conclusion that you simply can't run 23 copies of OpenOffice on our AMD wind tunnel and expect it to work. I'd like to try it with 512 MB per client. The cost wouldn't be too much, but how do you fit 11 GB into a computer with just 3 memory slots? Cheers. Steve.
On Saturday 01 January 2005 09:05, Örn Einar Hansen wrote:
On Thursday 30 December 2004 00:26, steve wrote:
Does anyone have any concrete information? Or maybe one simply doesn't do things this way.
Using NFS exports is the preferred way, in my opinion. But you'll always see less performance on a "network" oriented file system than what you'll see on your local filesystem. If you take a simple 100 Mbit network, that means you'll get a 10 Mbyte/s data rate. That's going to be about 10 times slower than an ATA 133 channel (that includes SATA 150). Or, using hdparm -t on my hard drive:
    /dev/sda:
     Timing buffered disk reads:  170 MB in  3.01 seconds = 56.43 MB/sec
I've got a different setup elsewhere, where I use NFS exports of all user drives and can log into any system on the network with the user drives intact.
What are 'user drives'? Jerome
I see no difference when working with the computer. This is on a 100 Mbit/s backend. So I suggest that the problem does not lie in the NFS/SSH stuff. I haven't followed the discussion too deeply, but what "authentication" scheme are you using? What do the log files say? Do you have any errors occurring, etc? Perhaps a slow LDAP user authentication scheme? Do you experience any other networking problems? For instance, if you try downloading something remotely, what speeds do you achieve?
Cheers and thanks for your patience and help. Steve.
My 2¢ worth, Örn
participants (13): Alexandr Malusek, Brad Dameron, Dylan, Hans Witvliet, James Knott, Joaquin Menchaca, Jon Nelson, Louis Richards, Patrick Shanahan, Paul Hewlett, steve, Susemail, Örn Einar Hansen