Well, I read and still am not sure, so let me ask: what is a cluster? :-)
On Sunday, 3 February 2002 14:24, Landy Roman wrote:
Well, I read and still am not sure, so let me ask.
what is a cluster? :-)
On Sun, Feb 03, 2002 at 08:24:20AM -0500, Landy Roman wrote:
what is a cluster? :-)
According to my dictionary: a number of things gathered close together in a small group, especially around a central point. Maybe not very helpful...

Are you asking about a so-called Beowulf cluster? It's a master which acts as a front to a lot of computers connected to it (slaves). The master can distribute the workload over the other computers. Normally, special programs are written where each part of that program, e.g. a calculation, can be sent to a different slave machine. A bit like seti@home does. The outside world only sees the master.

The so-called cluster size, the number of slaves, can vary. IIRC, NASA has a (Linux) Beowulf cluster of more than 1000 PCs.

(You could also have meant a number of other PC-related clusters, like the cluster size of a hard disk.)

Regards, Cees.
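[To make that concrete, here is a minimal sketch of the kind of program people write for a Beowulf cluster, using MPI, the message-passing library commonly used on such machines. Everything in it is illustrative, not something described in this thread: the pi calculation and the work-splitting scheme are assumptions. Each node computes its own slice of the work and the master (rank 0) collects the results.]

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    long i, n = 10000000;
    double h, local = 0.0, pi = 0.0;

    MPI_Init(&argc, &argv);                /* join the cluster-wide job */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* which node am I?          */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* how many nodes in total?  */

    /* every node integrates its own interleaved slice of 4/(1+x^2) on [0,1] */
    h = 1.0 / (double)n;
    for (i = rank; i < n; i += size) {
        double x = h * ((double)i + 0.5);
        local += 4.0 / (1.0 + x * x);
    }
    local *= h;

    /* the master (rank 0) gathers and sums the partial results */
    MPI_Reduce(&local, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("pi is approximately %.16f\n", pi);

    MPI_Finalize();
    return 0;
}

[Built with mpicc and launched with something like "mpirun -np 16 ./pi", the same binary runs on every slave, only the master prints the answer, and the outside world never needs to know how many machines did the work.]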
I meant more like the Beowulf cluster?

On Sun, 3 Feb 2002 14:56:08 +0100 Cees van de Griend <cees-list@griend.xs4all.nl> wrote:
On Sun, Feb 03, 2002 at 08:24:20AM -0500, Landy Roman wrote:
what is a cluster? :-)
According to my dictionary: a number of things gathered close together in a small group, especially around a central point.
Maybe not very helpful...
Are you asking about a so-called Beowulf cluster?
It's a master which acts as a front to a lot of computers connected to it (slaves). The master can distribute the workload over the other computers. Normally, special programs are written where each part of that program, e.g. a calculation, can be sent to a different slave machine. A bit like seti@home does.
The outside world only sees the master.
The so-called cluster size, the number of slaves, can vary. IIRC, NASA has a (Linux) Beowulf cluster of more than 1000 PCs.
(You could also have meant a number of other PC-related clusters, like the cluster size of a hard disk.)
Regards, Cees.
Yes indeed; thanks. I've got a picture of it now, but I guess more reading is needed.

On Sun, 3 Feb 2002 16:12:05 +0100 Cees van de Griend <cees-list@griend.xs4all.nl> wrote:
On Sun, Feb 03, 2002 at 08:58:46AM -0500, Landy Roman wrote:
I meant more like the Beowulf cluster?
Then my guess was correct. I hope this was the info you were looking for.
Regards, Cees.
Here is one definition of a cluster: http://www.pcwebopedia.com/TERM/c/cluster.html

This has been a handy reference link for me; I hope you find it useful.

Diane
On 3 Feb 2002, Cees van de Griend wrote:
The so-called cluster size, the number of slaves, can vary. IIRC, NASA has a (Linux) Beowulf cluster of more than 1000 PCs.
Google.com also runs on a cluster of thousands (IIRC, 3000) of PCs.

The advantages of PCs and Linux are very evident, and they complement each other. Both are cheap (money-wise), flexible, and powerful. These characteristics are essential to those needing huge amounts of processing power.

This is why Linux has caught on immensely in the film industry. While the workstations used to create special effects are usually "big iron" Unix machines (e.g., Irix on an SGI), render farms (those used for many major motion pictures, such as _Titanic_ and _Lord of the Rings_) primarily use large numbers of Linux boxes running on cheap-ass IA32 machines. I remember reading an article where the producers chose Linux for rendering scenes in _Lord of the Rings_ because it had a price/performance ratio of 2:1 compared to anything else.

-- Karol Pietrzak PGP KeyID: 3A1446A0
On Sun, Feb 03, 2002 at 11:26:34AM -0500, Karol Pietrzak wrote:
Google.com also runs on a cluster of thousands (IIRC, 3000) of PC's. The advantages of PC's and Linux are very evident and they complement each other. Both are cheap (monetary-wise), flexible, and powerful.
A minor correction; I just want to share the information I have. As of July 2001, Google ran 10,000 PCs spread over 4 co-locations: two on the East Coast and two on the West Coast. The PCs are an el-cheapo design (in the words of Google's sysadmin) and the rate of failures is 52 PCs a day. They don't fix them, just replace and rebuild.

This comes from my notes from the BayLISA meeting of July 19, 2001, where Frank Cusack, a system administrator at Google, gave a presentation on Network and Machine Architecture at Google.

-Kastus
A minor correction; I just want to share the information I have. As of July 2001, Google ran 10,000 PCs spread over 4 co-locations: two on the East Coast and two on the West Coast. The PCs are an el-cheapo design (in the words of Google's sysadmin) and the rate of failures is 52 PCs a day. They don't fix them, just replace and rebuild.
This comes from my notes from the BayLISA meeting of July 19, 2001, where Frank Cusack, a system administrator at Google, gave a presentation on Network and Machine Architecture at Google.
What kind of failures do they have? 52 PCs every day looks like A LOT! That's a 0.52% chance for a PC to fail every day! If the chance stays the same over time, in 100 days they would replace 52% of all PCs (or there is a 52% chance for any one PC to have broken!).

OK, I am making the assumption that the chance is always the same, and that is obviously not true, but it still looks like they have a lot of failures. Does anyone have further information on this?

Praise
On Mon, 4 Feb 2002 15:52:29 +0100 Praise <praisetazio@tiscalinet.it> wrote:
What kind of failures do they have? 52 PCs every day looks like A LOT! That's a 0.52% chance for a PC to fail every day! If the chance stays the same over time, in 100 days they would replace 52% of all PCs (or there is a 52% chance for any one PC to have broken!).
OK, I am making the assumption that the chance is always the same, and that is obviously not true, but it still looks like they have a lot of failures. Does anyone have further information on this?
I would guess that 52 PCs a day out of 10,000 is not that high if they are using generic PC parts under high stress loads. Probably a lot of hard drive failures. Also, if they don't stress-test the new parts before they put them in, part of that .000052 is just bad components to begin with.

How many "bad returns" is typical for PC parts?

-- $|=1;while(1){print pack("h*",'75861647f302d4560275f6272797f3');sleep(1); for(1..16){for(8,32,8,7){print chr($_);}select(undef,undef,undef,.05);}}
I would guess that 52 PCs a day out of 10,000 is not that high if they are using generic PC parts under high stress loads. Probably a lot of hard drive failures. Also, if they don't stress-test the new parts before they put them in, part of that .000052 is just bad components to begin with.
How many "bad returns" is typical for pc parts?
I hope that's it... because if not, it's a high rate of PC failure. Very high! (And the correct number is 0.0052 :-)) If you assume a constant rate of failure, they would have to replace all the computers in a very short time: about 192 days. Even if you cut it in half, assuming that many PCs arrive already broken, you find that they replace every computer in 384 days. That looks very expensive, and now I am worried about my poor old PCs :-(

Praise
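[A quick numeric sketch of that assumption; this is an illustration, not something computed in the thread. Under a constant, independent 0.52% daily failure chance per machine, 52 replacements a day does add up to 10,000 replacements in about 192 days, but the fraction of the *original* machines that have failed at least once grows more slowly, because replacement machines fail too.]

#include <stdio.h>

int main(void)
{
    double p = 52.0 / 10000.0;  /* assumed constant daily failure chance per PC */
    double surviving = 1.0;     /* probability a given PC has never failed      */
    int day;

    for (day = 1; day <= 192; day++) {
        surviving *= 1.0 - p;
        if (day == 50 || day == 100 || day == 192)
            printf("day %3d: %.0f%% of the original PCs have failed at least once\n",
                   day, 100.0 * (1.0 - surviving));
    }
    return 0;
}

[Run as-is it prints roughly 23% failed by day 50, 41% by day 100, and 63% by day 192, so "replace all computers in 192 days" overstates the turnover of any individual machine, though the total replacement bill is the same.]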
On Tue, 5 Feb 2002 11:56:34 +0100 Praise <praisetazio@tiscalinet.it> wrote:
I hope that's it... because if not, it's a high rate of PC failure. Very high! (And the correct number is 0.0052 :-)) If you assume a constant rate of failure, they would have to replace all the computers in a very short time: about 192 days.
Statistics are often misleading. I'll bet the actual figure is something like "on any given day, 52 out of 10,000 PCs are down". That would include broken PCs from the day before that weren't fixed yet, sort of a carry-over. It doesn't mean that 52 previously unbroken PCs dropped off for the first time. If I had a fleet of 10,000 rental cars, and on any given day they all worked except for 52, I would say I'm doing pretty good.

-- $|=1;while(1){print pack("h*",'75861647f302d4560275f6272797f3');sleep(1); for(1..16){for(8,32,8,7){print chr($_);}select(undef,undef,undef,.05);}}
participants (8)
- Cees van de Griend
- Daniel Fichtner
- Diane
- Karol Pietrzak
- Konstantin (Kastus) Shchuka
- Landy Roman
- Praise
- zentara