Over? We had a comm problem at work where some communications hardware upstream of our unix server was translating EBCDIC when it shouldn't. Our financial transaction processor (circa 2003) does the translation itself.

Oh man.... you have my sympathies. I guess I meant that *for me* they are over.... :-))
EBCDIC won't die as long as there are financial institutions, because those cheap sons of guns never throw anything away. It is rare for them to decommission a mainframe until it spontaneously combusts.

Well, they won't run forever, nothing ever does. You have to remember that during the 60s and 70s IBM was the mainframe monopoly monster... akin to what M$ has become today.... you did it IBM's way or you didn't play... thank goodness those days are O V E R also. You remember 2001 A Space O... HAL was a word play on IBM (one letter off).... "I'm sorry Dave, I'm afraid I can't do that..." .... the only way Dave could shut down HAL was to smash through the airlock, climb into the mnemonic memory bodily, and start unplugging boards... ;-))
On Tuesday 19 September 2006 20:31, you wrote:
At work I understand they're experimenting with LPARs and running linux on some of their newer, big iron. It might even be suse, but us unix dudes aren't allowed to play on the mainframes.
Yes, but what a waste. Distributed processing (Beowulf style) is the future of digital computing (me thinks)... that's what many of us are experimenting with / researching these days. Why have one giant piece of, ah-- hardware, running multiple copies of linux... or anything else... when you can distribute the machine across a network of Beo-clusters for a fraction of the cost using off-the-shelf parts? As a for-instance, the Big Idea company... you know, the VeggieTales people... made their Jonah movie by networking 500 linux machines together (all rack-mounted PCs) rendering day and night for a couple of months... they didn't buy a mainframe with 500 LPARs.... :)

-- Kind regards, M Harris <><
M Harris wrote:
At work I understand they're experimenting with LPARs and running linux on some of their newer, big iron. It might even be suse, but us unix dudes aren't allowed to play on the mainframes.

Yes, but what a waste. Distributed processing (Beowulf style) is the future of digital computing (me thinks)... that's what many of us are experimenting with / researching these days. Why have one giant piece of, ah-- hardware, running multiple copies of linux... or anything else... when you can distribute the machine across a network of Beo-clusters for a fraction of the cost using off-the-shelf parts? As a for-instance, the Big Idea company... you know, the VeggieTales people... made their Jonah movie by networking 500 linux machines together (all rack-mounted PCs) rendering day and night for a couple of months... they didn't buy a mainframe with 500 LPARs.... :)
At a Linux conference I attended last April, an IBM rep made the point that for a given load, mainframes tend to be more energy efficient than a bunch of PCs, an important consideration these days. And since the presentation was about virtual machines, he said that it's easier to balance loads across several virtual machines than across separate boxes, and that "networking" between virtual machines runs at memory speed, which is considerably faster than Ethernet.
On Wednesday 20 September 2006 06:12, James Knott wrote:
As a for-instance, Big Idea company... you
know, the VeggieTales people... made their Jonah movie by networking 500 linux machines together (all rack-mounted PCs) rendering day and night for a couple of months... they didn't buy a mainframe with 500 LPARs.... :)
At a Linux conference I attended last April, an IBM rep made the point that for a given load, mainframes tend to be more energy efficient than a bunch of PCs, an important consideration these days. And since the presentation was about virtual machines, he said that it's easier to balance loads across several virtual machines than across separate boxes, and that "networking" between virtual machines runs at memory speed, which is considerably faster than Ethernet.

More energy efficient??? Is he kidding? Have you ever seen the power supply rooms for a mainframe shop...? Have you ever seen the water coolers and air-conditioners needed to keep those silly things from melting? There are several factors to consider:
1) The mainframe is expensive... we're not talking just spendy here... we're talking outrageous.
2) Each LPAR operates at a fraction of the speed/efficiency of a stand-alone counterpart. (not energy, computational power)
3) The mainframe is housed in a single location -duh- and if you lose the power (or anything else) you've lost everything. (all eggs in one basket is a bad plan... ancient prairie proverb)

[ On the other hand: ]

1) Distributed systems can be built with off-the-shelf parts inexpensively... many science centers have done this already...
2) Distributed systems do not require air-conditioners, water coolers, etc... and can be placed strategically on the power grid so that losing one (power, etc) doesn't bring the whole thing down.
3) Distributed systems have cross redundancy as well as shared resources... including processor resources... again, so that if you lose one machine the others just take over and everything keeps running.
4) Distributed systems (say all 1.8GHz) running in parallel (say 500 of them) will blow the doors off of a mainframe running LPARs... even using symmetric multi-processing. (or some other stat pulled out of the air)

The old school of thought was to take a mainframe and share it across multiple users (terminals) and thereby defray the cost of the mainframe... because nobody had personal computers!! uh... 1977.

The new school of thought is to take thousands of networked super PCs (200GB DASD+, 1.8GHz+) and pull them together into one gigantic networked super processor with total redundancy, including processor resources.

The brain (if you will) is the network. Each cell or cluster of cells functions as one small redundant piece of the whole... and the whole collective body is thousands of times more powerful than the sum of its many parts...

The power of a mainframe is limited (divided) across LPARs... The power of a distributed mega cluster is *amplified* on the whole.

------

I have experimented with this at home... most folks have one large more or less state-of-the-art PC with the latest toys attached... it goes down... and they're in the shop for a while.

My system is a distributed cluster of 14 older systems that never goes down. It is completely redundant, lightning fast, with self backup, mirroring, and processor sharing. And the cool thing is that authorized users (members of the household) can log on from one of five terminals (also nodes in the network) and access the entire cluster (or individual machines), including graphical interfaces to any of the nodes on the cluster if need be. It's fun, it's fast, it's reliable, and it's cheap....

Now... consider multiplying this idea by 500 or 5000 or 5,000,000. Linux copies running on an IBM mainframe are so 1970s... come on guys... this is the 21st Century... think of the SETI@home project... and you're coming closer to where I'm going here.

There is a reason why SETI@home is not called SETI@mainframe... ;-)

-- Kind regards, M Harris <><
M Harris wrote:
On Wednesday 20 September 2006 06:12, James Knott wrote:
As a for-instance, Big Idea company... you
know, the VeggieTales people... made their Jonah movie by networking 500 linux machines together (all rack-mounted PCs) rendering day and night for a couple of months... they didn't buy a mainframe with 500 LPARs.... :)

At a Linux conference I attended last April, an IBM rep made the point that for a given load, mainframes tend to be more energy efficient than a bunch of PCs, an important consideration these days. And since the presentation was about virtual machines, he said that it's easier to balance loads across several virtual machines than across separate boxes, and that "networking" between virtual machines runs at memory speed, which is considerably faster than Ethernet.

More energy efficient??? Is he kidding? Have you ever seen the power supply rooms for a mainframe shop...? Have you ever seen the water coolers and air-conditioners needed to keep those silly things from melting? There are several factors to consider:
1) The mainframe is expensive... we're not talking just spendy here... we're talking outrageous.
2) Each LPAR operates at a fraction of the speed/efficiency of a stand-alone counterpart. (not energy, computational power)
3) The mainframe is housed in a single location -duh- and if you lose the power (or anything else) you've lost everything. (all eggs in one basket is a bad plan... ancient prairie proverb)

[ On the other hand: ]
Clusters tend to be in one location too.
1) Distributed systems can be built with off-the-shelf parts inexpensively... many science centers have done this already...
2) Distributed systems do not require air-conditioners, water coolers, etc... and can be placed strategically on the power grid so that losing one (power, etc) doesn't bring the whole thing down.
Clusters tend to be in one location. Otherwise communications will kill you. Even with individual computers, heat is always a problem, and these days it's severely straining the AC. Water cooling can reduce problems.
3) Distributed systems have cross redundancy as well as shared resources... including processor resources... again, so that if you lose one machine the others just take over and everything keeps running.
4) Distributed systems (say all 1.8GHz) running in parallel (say 500 of them) will blow the doors off of a mainframe running LPARs... even using symmetric multi-processing. (or some other stat pulled out of the air)
The old school of thought was to take a mainframe and share it across multiple users (terminals) and thereby defray the cost of the mainframe... because nobody had personal computers!! uh... 1977.
The new school of thought is to take thousands of networked super PCs (200GB DASD+, 1.8GHz+) and pull them together into one gigantic networked super processor with total redundancy, including processor resources.
The brain (if you will) is the network. Each cell or cluster of cells functions as one small redundant piece of the whole... and the whole collective body is thousands of times more powerful than the sum of its many parts...
The power of a mainframe is limited (divided) across LPARs...
The power of a distributed mega cluster is *amplified* on the whole.
------
I have experimented with this at home... most folks have one large more or less state-of-the-art pc with the latest toys attached... it goes down... and they're in the shop for a while.
My system is a distributed cluster of 14 older systems that never goes down. It is completely redundant, lightning fast, with self backup, mirroring, and processor sharing. And the cool thing is that authorized users (members of the household) can log on from one of five terminals (also nodes in the network) and access the entire cluster (or individual machines), including graphical interfaces to any of the nodes on the cluster if need be. It's fun, it's fast, it's reliable, and it's cheap....
Now... consider multiplying this idea by 500 or 5000 or 5,000,000. Linux copies running on an IBM mainframe are so 1970s... come on guys... this is the 21st Century... think of the SETI@home project... and you're coming closer to where I'm going here.
There is a reason why SETI@home is not called SETI@mainframe... ;-)
IBM sells mainframes and IBM sells clusters. I suspect they know the pros and cons of each. It's more a matter of which is best for your application. In some cases mainframes, in others clusters. Clusters tend to do best when you can have many parallel operations; mainframes, when you can't.
On Thursday 21 September 2006 1:03 am, M Harris wrote:
My system is a distributed cluster of 14 older systems that never goes down. It is completely redundant, lightning fast, with self backup, mirroring, and processor sharing. And the cool thing is that authorized users (members of the household) can log on from one of five terminals (also nodes in the network) and access the entire cluster (or individual machines), including graphical interfaces to any of the nodes on the cluster if need be. It's fun, it's fast, it's reliable, and it's cheap....
Fun, fast, and reliable, I agree. But cheap? 14 systems at 300 watts per system (typical power supply rating) comes to 4200 watts of constant draw. That's a pretty hefty electric bill. Paul
Paul Abrahams wrote:
On Thursday 21 September 2006 1:03 am, M Harris wrote:
My system is a distributed cluster of 14 older systems that never goes down. It is completely redundant, lightning fast, with self backup, mirroring, and processor sharing. And the cool thing is that authorized users (members of the household) can log on from one of five terminals (also nodes in the network) and access the entire cluster (or individual machines), including graphical interfaces to any of the nodes on the cluster if need be. It's fun, it's fast, it's reliable, and it's cheap....
Fun, fast, and reliable, I agree. But cheap? 14 systems at 300 watts per system (typical power supply rating) comes to 4200 watts of constant draw. That's a pretty hefty electric bill.
An average older system will not consume 300W. My new system, equipped with an AMD X2 4200 and an Areca RAID with 5 HDDs, uses about 220W on average. Together with an older FSC Primergy 470 (Dual PIII-500 + 3 SCSI HDD RAID) and an older PII as a firewall, they use about 400W altogether. Included are a 19" TFT monitor and the small fry like the switch, cable modem, etc.

I heard the same screams in our company from my boss when the electrician told him our UPS was underpowered (he assumed 400W per server). We now have a few more servers in the rack and the UPS load is still safe at about 57%.

Though I agree in general: a few fast machines will consume less energy than a lot of small systems.

Sandy
-- List replies only please! Please address PMs to: news-reply2 (@) japantest (.) homelinux (.) com
On Thursday 21 September 2006 11:02 am, Sandy Drobic wrote:
An average older system will not consume 300W. My new system, equipped with an AMD X2 4200 and an Areca RAID with 5 HDDs, uses about 220W on average. Together with an older FSC Primergy 470 (Dual PIII-500 + 3 SCSI HDD RAID) and an older PII as a firewall, they use about 400W altogether. Included are a 19" TFT monitor and the small fry like the switch, cable modem, etc.
Is that 220W per system or 400W for all the systems put together? The latter would be pretty surprising for 14 systems -- just 15W per system!!! And if it's 220W per system, that's about 3200W, or about 30 amps -- enough to blow a 20A breaker on most home electrical systems. I assume those boxes are not on the same circuit, then.

Paul
On Thursday 21 September 2006 13:01, Paul Abrahams wrote:
On Thursday 21 September 2006 11:02 am, Sandy Drobic wrote:
An average older system will not consume 300W. My new system, equipped with an AMD X2 4200 and an Areca RAID with 5 HDDs, uses about 220W on average. Together with an older FSC Primergy 470 (Dual PIII-500 + 3 SCSI HDD RAID) and an older PII as a firewall, they use about 400W altogether. Included are a 19" TFT monitor and the small fry like the switch, cable modem, etc.
Is that 220W per system or 400W for all the systems put together? The latter would be pretty surprising for 14 systems -- just 15W per system!!! And if it's 220W per system, that's about 3200W, or about 30 amps -- enough to blow a 20A breaker on most home electrical systems. I assume those boxes are not on the same circuit, then.
Paul
You are confusing two different people here. Sandy has those three systems. Sandy states that it's 400W for all three. It was M Harris who has the 14 systems in a distributed cluster at home. I'd like to know the power specs and usage on all of that too. I like the idea of all the speed, processing power, etc., but I would not like paying the electric bill. Now if those 14 are mini-ITX style systems versus 14 Itaniums...

Stan
Paul Abrahams wrote:
On Thursday 21 September 2006 11:02 am, Sandy Drobic wrote:
An average older system will not consume 300W. My new system, equipped with an AMD X2 4200 and an Areca RAID with 5 HDDs, uses about 220W on average. Together with an older FSC Primergy 470 (Dual PIII-500 + 3 SCSI HDD RAID) and an older PII as a firewall, they use about 400W altogether. Included are a 19" TFT monitor and the small fry like the switch, cable modem, etc.
Is that 220W per system or 400W for all the systems put together? The latter would be pretty surprising for 14 systems -- just 15W per system!!! And if it's 220W per system, that's about 3200W, or about 30 amps -- enough to blow a 20A breaker on most home electrical systems. I assume those boxes are not on the same circuit, then.
3 systems:
- Desktop/Server (VMware)          220W
- Server Primergy 470 (Dual P3)    110W
- Firewall (PII)                    50W
- Monitor                           25W
- small fry (switch, telephone)     10W
--------------------------------------------
all together:                    ~ 410W

They are all behind a 500W UPS, so I know exactly how much energy my systems are consuming. I even checked independently with an energy measuring device how much each system uses. (^-^) Those are my productive systems; other test systems without valuable data are not included here.

Sandy
-- List replies only please! Please address PMs to: news-reply2 (@) japantest (.) homelinux (.) com
On Thu, 2006-09-21 at 17:02 +0200, Sandy Drobic wrote:
I heard the same screams in our company from my boss when the electrician told him our UPS was underpowered (he assumed 400W per server). We now have a few more servers in the rack and the UPS load is still safe at about 57%.
Is it a non-switching UPS? Meaning that the equipment runs off the batteries all the time? If so, your UPS will last longer if the load is more than 57%. Otherwise, the batteries lose their charge capacity and will last a shorter time when power fails. We use UPS in road vehicles and have found this to be true. Our UPS suppliers have all said the same thing. An over-dimensioned UPS' batteries do not last as long. Same thing with petrol power generators. They last longer if the current taken is closer to their rated power capacity. -- Roger Oberholtzer
Roger Oberholtzer wrote:
On Thu, 2006-09-21 at 17:02 +0200, Sandy Drobic wrote:
I heard the same screams in our company from my boss when the electrician told him our UPS was underpowered (he assumed 400W per server). We now have a few more servers in the rack and the UPS load is still safe at about 57%.
Is it a non-switching UPS? Meaning that the equipment runs off the batteries all the time? If so, your UPS will last longer if the load is more than 57%. Otherwise, the batteries lose their charge capacity and will last a shorter time when power fails. We use UPS in road vehicles and have found this to be true. Our UPS suppliers have all said the same thing. An over-dimensioned UPS' batteries do not last as long. Same thing with petrol power generators. They last longer if the current taken is closer to their rated power capacity.
No, it's a switching UPS.

Though, talking about overdimensioned UPSes: a year ago I had to buy a new one, because the old one broke down (the warranty had just run out), and APC told me they could only offer a trade-in if I took the next bigger version. Unfortunately, even the old one (8000VA) had never been loaded beyond 20%, so I felt a bit ridiculed by that offer. I went and bought a new one (5000VA) for half the offered trade-in price, which came equipped with a network monitoring card. Now I can use it to measure the temperature in the server room and scream for help if the air conditioning in the server room breaks down.

Here at home I use a switching 500W (750VA) UPS, since short power outages became too frequent for my comfort.

Sandy
-- List replies only please! Please address PMs to: news-reply2 (@) japantest (.) homelinux (.) com
On Thursday 21 September 2006 08:50, Paul Abrahams wrote:
> Fun, fast, and reliable, I agree. But cheap? 14 systems at 300 watts per
> system (typical power supply rating) comes to 4200 watts of constant draw.
> That's a pretty hefty electric bill.

Yeah, but that's not how the power stacks up....

1) I have no cathode ray tubes.... the last one went to the recycling center months ago. Those things are the real power hogs.

2) I have no machines with a power supply rated in excess of 250 watts. Most of them use a power supply rated at less than 200 watts, and several are rated at less than 150 watts.

3) <here is the kicker> Modern power supplies aren't like a lightbulb... always drawing (delivering) their rated power... far from it. They are switching supplies and run at a fraction of their rating... unless of course the user adds stuff that draws more power. Switching means that the machine's power supply delivers more power as it's needed (more current at the same regulated voltage) using a technology called pulse width modulation. Circuitry that senses a voltage drop increases the width of the transformer pulses and voila... same voltage out at increased current... and increased power dissipation. All this boils down to the fact that a headless computer with no hardware except HD, MB, and Eth (no dvd, cd, floppy, extra drives, sound cards, midi ports, display cards, etc) will run at a surprisingly small percentage of the system's power supply rating... maybe 1/3 or less. Most of my machines are drawing less power than a 60 watt lightbulb. By the way, most rack-mounted slim PCs are similar. Mine are in cases, however.

4) I have never measured this.... strange for a guy like me... so, tomorrow, I will take a reading with everything down... and another reading with everything up... and I'll let you know... but I can tell you this... the only time the dial spins fast on my meter is when the A/C is on... ;-) I'll bet most folks burn more power running their lights than I do running my network... we'll see.

5) The "cool" thing "literally" about Linux is that most of the time the processor is idle... run top sometime and check it out. Processors get hot (consume more power) while they are cycling... when idle they run cooler and consume less power... so most of the time the MB is drawing/dissipating minimal power. Of course this isn't true when everything is clicking. :)

6) I MAY HAVE MISLED YOUS GUYS.... when I said they never go down I didn't mean they run 24x7. (for instance the whole shooting match is off-line during storms 'n such) What I meant was that we are not *down* due to hardware failure.... in other words, when I lose a drive I don't lose data and the family is not without the network while that one machine is being repaired. In fact, last week I lost two drives (on the same day, both WDs 8 years old) and the family did not lose the computer network, data, or functionality. Redundancy, backup, and alternate paths are key on modern systems I think. Bill is still stuck in the windoze mode of thinking that everyone is surfing gleefully through MSN.com and playing 3D games all day... for crying out loud most of my machines have no monitor and four of them have no wires at all except for the power cord (they are the wifi guys, the vulnerable part of the experiment).

-- Kind regards, M Harris <><
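For what it's worth, a quick way to see just how idle a given box really is, using the standard procps tools (a minimal sketch; the exact field layout varies a little between versions):

  top -bn1 | grep "Cpu(s)"   # one batch-mode sample; the %id field is idle CPU
  vmstat 5 3                 # three samples, 5 seconds apart; the "id" column is idle CPU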
On Friday 22 September 2006 12:19 am, M Harris wrote:
3) <here is the kicker> Modern power supplies aren't like a lightbulb... always drawing (delivering) their rated power... far from it. They are switching supplies and run at a fraction of their rating... unless of course the user adds stuff that draws more power. Switching means that the machine's power supply delivers more power as it's needed (more current at the same regulated voltage) using a technology called pulse width modulation.
How do you tell if a power supply is of the switching variety? Is that info usually on the nameplate? When you get a power supply as part of a box, that's probably all you have to go on. Paul
Paul Abrahams wrote:
On Friday 22 September 2006 12:19 am, M Harris wrote:
3) <here is the kicker> Modern power supplies aren't like a lightbulb... always drawing (delivering) their rated power... far from it. They are switching supplies and run at a fraction of their rating... unless of course the user adds stuff that draws more power. Switching means that the machine's power supply delivers more power as it's needed (more current at the same regulated voltage) using a technology called pulse width modulation.
How do you tell if a power supply is of the switching variety? Is that info usually on the nameplate? When you get a power supply as part of a box, that's probably all you have to go on.
These days, all computer power supplies and many others are switching supplies. They're far more efficient than the old linear supplies.
Paul Abrahams wrote:
On Friday 22 September 2006 12:19 am, M Harris wrote:
3) <here is the kicker> Modern power supplies aren't like a lightbulb... always drawing (delivering) their rated power... far from it. They are switching supplies and run at a fraction of their rating... unless of course the user adds stuff that draws more power. Switching means that the machine's power supply delivers more power as it's needed (more current at the same regulated voltage) using a technology called pulse width modulation.
How do you tell if a power supply is of the switching variety? Is that info usually on the nameplate? When you get a power supply as part of a box, that's probably all you have to go on.
As far as I know, since the late 1980s PSUs have been switching PSUs. I remember having one repaired by a tech for the computer I was using to run a BBS at the end of 1989. Cheers. -- Paranoia is simply an optimistic outlook on life.
Power supplies that are not switching types have large, heavy transformers in them. Those types of power supplies haven't been common for a very long time. The switching types are a lot less expensive to build and are more efficient. However, they are more difficult to repair, and often the labor cost isn't worth it compared to buying a new one.
Fri, 22 Sep 2006, by abrahams@acm.org:
On Friday 22 September 2006 12:19 am, M Harris wrote:
3) <here is the kicker> Modern power supplies aren't like a lightbulb... always drawing (delivering) their rated power... far from it. They are switching supplies and run at a fraction of their rating... unless of course the user adds stuff that draws more power. Switching means that the machine's power supply delivers more power as it's needed (more current at the same regulated voltage) using a technology called pulse width modulation.
How do you tell if a power supply is of the switching variety? Is that info usually on the nameplate? When you get a power supply as part of a box, that's probably all you have to go on.
A 400 to 500 Watt non-switching PSU would need a transformer of about 10-15kg (~22-35lbs) for all the iron and copper, plus inductors of up to 50H, smoothing capacitors of several Farads and about 0.5 square meter of heat-sink to cool the regulator transistors etc. Your PC would probably be the size and weight of a complete DEC PDP-11 mini computer system. Theo -- Theo v. Werkhoven Registered Linux user# 99872 http://counter.li.org ICBM 52 13 26N , 4 29 47E. + ICQ: 277217131 SUSE 9.2 + Jabber: muadib@jabber.xs4all.nl Kernel 2.6.8 + See headers for PGP/GPG info. Claimer: any email I receive will become my property. Disclaimers do not apply.
On Thu, Sep 21, 2006 at 12:03:39AM -0500, M Harris wrote:
My system is a distributed cluster of 14 older systems that never goes down. It is completely redundant, lightning fast, with self backup, mirroring, and processor sharing. And the cool thing is that authorized users (members of the household) can log on from one of five terminals (also nodes in the network) and access the entire cluster (or individual machines), including graphical interfaces to any of the nodes on the cluster if need be. It's fun, it's fast, it's reliable, and it's cheap....
Sounds interesting. You made me curious. Can you please provide some more info?
On Thursday 21 September 2006 15:15, Josef Wolf wrote:
Sounds interesting. You made me curious. Can you please provide some more info?
http://www.beowulf.org/

First, see the above link for general info on the concept.... but my idea is just a bit different as well.... consider logging onto a network instead of a machine. This is similar to the old days of logging onto a mainframe from a terminal... only there's no mainframe... only a network. If it works well the user has actually no idea which processor(s) are actually doing her work... or which memory holds her data... or which specific drives are mirroring her spreadsheets... she only knows that this virtual space in this mega network looks to her like a real stand-alone machine.... uh, virtual distributed network processing.... or, uh.... virtual distributed network hosting... I haven't decided yet. But the idea is that the "network brain" is completely redundant, auto-scaling, auto-backing, and transparent to the user.

By the way... the user can log on to the network from *any* of the network's nodes and "look and feel" exactly the same.... logon in Kansas City is the same.... logon in Detroit... is the same... logon in Botswana... is the same.... uh, hate to say this... but let's suppose a scud missile hits one corner of one city and knocks out two nodes... no problem... the network shifts, replicates/mirrors, everything stays up... all except for the brain cells that died. The human brain works this way.... cells are always being cycled... dying (drinking) and being created... learning. Yet, the mind never goes down... usually.

Distributed processing is much more than just processing clusters like Beowulf... only that is a good start. Distributed processing is way more, by leaps and bounds, than windoze file serving. I am talking about complete resource sharing among thousands of PC super comps linked together to form floating redundant virtual processing spaces that can be initiated, or monitored, or controlled, from anywhere within the network.

-- Kind regards, M Harris <><
On Fri, Sep 22, 2006 at 12:46:09AM -0500, M Harris wrote:
On Thursday 21 September 2006 15:15, Josef Wolf wrote:
Sounds interesting. You made me curious. Can you please provide some more info?
First, see the above link for general info on the concept...
Well, I know the general concept of a cluster and beowulf. But you mentioned "network is your data". When you ssh to a different host to start your app, your data is located on that host. Having a tunnel to forward the display back to your original host doesn't make any difference here. So what do you mean when you say that your data is redundantly distributed? How is the data to be distributed to make sure it will be accessible from anywhere and won't disappear regardless of which machines fail?
[ ... ] which specific drives are mirroring her spreadsheets... she only knows that this virtual space in this mega network looks to her like a real stand-alone machine... uh, virtual distributed network processing.... or, uh.... virtual distributed network hosting... I haven't decided yet. But the idea is that the "network brain" is completely redundant, auto-scaling, auto-backing, and transparent to the user. [ ... ] the network shifts, replicates/mirrors, everything stays up... all except for the brain cells that died.
This is why I asked. And this is obviously more than just executing remotely with a forwarded X display (as you described on a different fork of this thread).
On Wed, September 27, 2006 10:19 am, Josef Wolf wrote:
On Fri, Sep 22, 2006 at 12:46:09AM -0500, M Harris wrote:
On Thursday 21 September 2006 15:15, Josef Wolf wrote:
Sounds interesting. You made me curious. Can you please provide some more info?
First, see the above link for general info on the concept...
Well, I know the general concept of a cluster and beowulf. But you mentioned "network is your data". When you ssh to a different host to start your app, your data is located on that host. Having a tunnel to forward the display back to your original host doesn't make any difference here. So what do you mean when you say that your data is redundantly distributed? How is the data to be distributed to make sure it will be accessible from anywhere and won't disappear regardless of which machines fail?
I know this is way OT, but y'all might find it interesting.

I was reading a few months back how Amazon distributes all their systems. When you go to Amazon's homepage, you are actually seeing the result of several dozen different systems coming together.

IIRC, they use any given OS (Linux, Unix, Wintendo) on any number of low-end whitebox systems distributed throughout the country to generate data. They have well over 100,000 of these systems grouped together in different data centers. Each system is replicated x times depending on need to ensure reliable access.

On any given day, they say up to 50 systems will fail. However, because of the massive redundancy, there's no effect on the users. They simply rip out the old system, put in a new one, run some sort of Ghost application, and the new system is on the network.

Again, because of this redundancy and spread, they have no need to control what OS is being used or what codebase is needed or even what database is being used for the given function. They may use SUSE, RedHat, CentOS, WinXP/2K3, BSD or whatever fits the need for the given development group. I personally find it brilliant and forward-thinking.

Okay, back to figuring out a GRUB issue on my second laptop. (BTW, I just tested - suspend to disk has been working like a charm on my Dell laptop. It didn't work reliably at all in 10.0 or 9.3.)

-- Kai Ponte www.perfectreign.com || www.4thedadz.com remember - a turn signal is a statement, not a request
PerfectReign wrote:
I know this is way OT, but y'all might find it interesting.
I was reading a few months back how Amazon distributes all their systems. When you go to amazon's homepage, you are actually seeing the result of several dozen different systems coming together.
IIRC, they use any given OS (Linux, Unix, Wintendo) on any number of low-end whitebox systems distributed throughout the country to generate data. They have well over 100,000 of these systems grouped together in different data centers. Each system is replicated x times depending on need to ensure reliable access.
On any given day, they say up to 50 systems will fail. However, because of the massive redundancy, there's no effect on the users. They simply rip out the old system, put in a new one, run some sort of Ghost application, and the new system is on the network.
I think you are talking about Google, not Amazon. (^-^) Sandy -- List replies only please! Please address PMs to: news-reply2 (@) japantest (.) homelinux (.) com
On Wed, September 27, 2006 11:06 am, Sandy Drobic wrote:
PerfectReign wrote:
Ghost application, and the new system is on the network.
I think you are talking about Google, not Amazon. (^-^)
Sandy
Actually I have no idea what Google does. I wouldn't be surprised - since they're in the neighborhood - if they use Sun or Apple servers.

Here's one article I read. I'll try and search for the other.

http://www.acmqueue.com/modules.php?name=Content&pa=showpage&pid=388&page=1

-- Kai Ponte www.perfectreign.com || www.4thedadz.com remember - a turn signal is a statement, not a request
Sandy Drobic wrote:
PerfectReign wrote:
I know this is way OT, but y'all might find it interesting.
I was reading a few months back how Amazon distributes all their systems. When you go to amazon's homepage, you are actually seeing the result of several dozen different systems coming together.
IIRC, they use any given OS (Linux, Unix, Wintendo) on any number of low-end whitebox systems distributed throughout the country to generate data. They have well over 100,000 of these systems grouped together in different data centers. Each system is replicated x times depending on need to ensure reliable access.
On any given day, they say up to 50 systems will fail. However, because of the massive redundancy, there's no effect on the users. They simply rip out the old system, put in a new one, run some sort of Ghost application, and the new system is on the network.
I think you are talking about Google, not Amazon. (^-^)
Sandy

Have you already heard of the Internet Archive? There are 3 worldwide centers with 1000s of computers each which exchange data permanently.

bye
Ronald
On 9/21/06, M Harris wrote:
I have experimented with this at home... most folks have one large more or less state-of-the-art pc with the latest toys attached... it goes down... and they're in the shop for a while.
My system is a distributed cluster of 14 older systems that never goes down. It is completely redundant, lightning fast, with self backup, mirroring, and processor sharing. And the cool thing is that authorized users (members of the household) can log on from one of five terminals (also nodes in the network) and access the entire cluster (or individual machines), including graphical interfaces to any of the nodes on the cluster if need be. It's fun, it's fast, it's reliable, and it's cheap....
despite the power issues for a home user, I want to start playing in this sand box, too ... will you discuss a bit the hardware and software resources you have marshalled in your HPC? What interconnect - just GB ethernet? What linux distro and cluster packages?

It would be way cool -- if you had an itch -- to write up a "what I did and how I did it". If you are using suse, then the suse wiki would be a good home. Or a "cool solutions" article.

Anyway ... I found it interesting, and I believe that "grid" kinds of computing are, indeed, the future. It's kind of like ethernet ... as speeds get faster and faster, the MAC protocols underlying ethernet are not the best/most efficient/easiest to grow. But, because everyone knows and uses ethernet, it *ends up* being the architecture of newer and newer technologies. Well, linux and commodity x86 hardware *may not* be *the best* way to scale out, but ... I believe it will dominate.

Peter
On Thursday 21 September 2006 15:54, Peter Van Lone wrote:

despite the power issues for a home user, I want to start playing in this sand box, too ... will you discuss a bit the hardware and software resources you have marshalled in your HPC? What interconnect - just GB ethernet? What linux distro and cluster packages?

I am not at the point where I can share the distribution source yet, but you can bet when I've got things a little closer to a 0.9 package level I will (and it will be a GPL source package too, to be sure). But the following are some things to play with that anyone can do without any programming at all... ok, maybe some small setup, but minimal.

Picture three machines (all headless, no desktops--- not even virtual ones) and one terminal, running KDE--- my personal favorite desktop. I have three things I really want to do today... 1) monitor my mail all day, 2) recompile my experimental kernel as many times as an 8-hour day will allow, and 3) work on my distributed network package--- mostly compilation, tweak, and recompilation, repeating all day as necessary.... whew. Ok?

All I use the terminal for is KDE and X. That's it--- that's all. The headless machines are assigned my three tasks:
1) kernel compiling
2) monitoring my mail all day
3) production development system all day

In this scenario all four machines appear to me as one machine with four processors. All three major tasks appear to be running on my terminal... however, each process is actually running on its own processor somewhere out there on my network. This works well in this scenario because most of the work is processor intensive, not network traffic intensive. Each task is run on a remote machine, passing its X output back to the terminal as a compressed and secure stream over an SSH tunnel. The machines are set up with firewall rules locking everything down except SSH, with X11 forwarding turned on.

Consider the following terminal command:

ssh -X -C -c blowfish -f myuser@remotehost "nohup xterm; sleep 2; exit"

What happens here? SSH starts a background process (-f) with X11 forwarding (-X), compressed (-C), using blowfish encryption (-c), to my first remote system, starting an x terminal (which will display through a virtual X11 connection back at my terminal machine) with hup signaling turned off so that when I log off the initiating xterm the secure X11 channel will stay up. When I'm done and log off the remote system the "sleep 2; exit" will clean things up nicely on the remote end.

So how is this helpful? This works with any application.... even vncviewer, or kontact, or whatever development environment you're using... emacs or whatever. The remote machines are used to provide the power behind the compilations, mail management, development, whatever, and the terminal I'm logged into is used for really nothing else except being the X server. This is the simplest and easiest form of distributed processing that can be done with no programming... and it's powerful. Essentially the four machines are working together as one large multiprocessor computer with one graphical user interface.

Note: I should point out that on slower networks like mine (normal 10/100) the -C compression flag is essential. However, on fast internal networks it just slows things down... if you have a GB network, all the more power to you... things will really fly.

Note2: The -X flag is a client flag initiating X11 forwarding of the virtual display info... you don't need to manually set the display... however, you do need to turn on X11 forwarding on the ssh server at each remote node.

-- Kind regards, M Harris <><
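As Note2 says, X11 forwarding also has to be enabled on the ssh server side of each remote node. A minimal sketch of what that looks like (standard OpenSSH config location; the restart command assumes SUSE's init scripts):

  # /etc/ssh/sshd_config on each remote node
  X11Forwarding yes

  # then restart sshd so the change takes effect, e.g. on SUSE:
  rcsshd restart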
On Friday 22 September 2006 00:07, M Harris wrote:
ssh -X -C -c blowfish -f myuser@remotehost "nohup xterm; sleep 2; exit"
What happens here? SSH starts a background process (-f) with X11 forwarding (-X), compressed (-C), using blowfish encryption (-c), to my first remote system, starting an x terminal (which will display through a virtual X11 connection back at my terminal machine) with hup signaling turned off so that when I log off the initiating xterm the secure X11 channel will stay up. When I'm done and log off the remote system the "sleep 2; exit" will clean things up nicely on the remote end.

Part II.
In the above example I manually choose which of the free systems will actually take the load, and I manually set up the comm links with ssh from a terminal. But there is a better way (and more fun) that uses the same technique with some minimal programming.

But first... Start from the local system, open an xterm, and bring up an xterm on the remote system with the technique above. Now log off the xterm on the local system. Then try this on the remote xterm....

Start /opt/kde3/bin/kontact & (assuming Suse 10) using the xterm you opened on the remote system being displayed on your local X server. Then start /opt/kde3/bin/kcalc & (note: the & runs the process in the background on the remote system; however, the display will be forwarded back to your local box via the ssh tunnel just like the xterm was!)

Experiment with a few of these... keep in mind that the app is running in remote memory on the remote processor... The remote processor is taking the load.

Now... for Part II... this takes some programming which I have not released yet... Every machine will have a load daemon running on it. Each user node has a requestor daemon that intercepts service or app launch requests. Instead of icons being linked to local system launchers, icons are linked to the requestor daemon. The requestor daemon transmits broadcast beacons (similar to dhcp requests) which are monitored and coordinated by the load daemons. With a couple of quick handshakes the requestor daemon is authorized to set up an ssh tunnel to the machine deemed to be the *most* free over the last load period. This is actually controlled by fairly complicated heuristics, but for a small local cluster like my own it can be set up with minimal complication.

Now... instead of launching local apps by manually setting up an ssh tunnel to a preselected machine, I just click an icon and the *most available* machine gets the ssh request, and the app's X11 interface is forwarded back to my local machine in a seamless way. In other words, the apps are launched locally, run remotely (using the first relatively free machine), and the app's X11 display shows up on my local machine as though it were running locally. Cool, huh? It's just the beginning.....

-- Kind regards, M Harris <><
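Since the requestor/load daemons aren't released yet, here is a hypothetical, much-simplified sketch of the same idea in plain shell: poll a fixed list of nodes for their 1-minute load average and launch the requested app, with forwarded X11, on the least loaded one. The node names, the script name, and the use of /proc/loadavg over ssh are illustrative assumptions (it also assumes passwordless, key-based ssh to each node), not part of the actual package described above.

  #!/bin/bash
  # pick-host.sh -- hypothetical stand-in for the requestor/load daemons:
  # find the least loaded node and launch the app there over an ssh X11 tunnel.
  NODES="node1 node2 node3"      # illustrative host names
  APP="${1:-xterm}"              # app to launch; defaults to xterm

  best_host=""
  best_load=""
  for host in $NODES; do
      # the first field of /proc/loadavg is the 1-minute load average
      load=$(ssh "$host" cat /proc/loadavg 2>/dev/null | cut -d' ' -f1)
      [ -n "$load" ] || continue                 # skip unreachable nodes
      if [ -z "$best_load" ] || awk "BEGIN{exit !($load < $best_load)}"; then
          best_load="$load"
          best_host="$host"
      fi
  done

  if [ -z "$best_host" ]; then
      echo "no node reachable" >&2
      exit 1
  fi

  echo "launching $APP on $best_host (load $best_load)"
  ssh -X -C -c blowfish -f "$best_host" "nohup $APP; sleep 2; exit"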
participants (13)
- Basil Chupin
- James Knott
- Josef Wolf
- M Harris
- Paul Abrahams
- PerfectReign
- Peter Van Lone
- Robert Lewis
- Roger Oberholtzer
- Ronald Wiplinger
- Sandy Drobic
- Stan Glasoe
- Theo v. Werkhoven