Hans Witvliet said the following on 03/20/2010 06:53 PM:
On Sat, 2010-03-20 at 15:31 -0700, Randall R Schulz wrote:
I expect we might see a dip in reliability but since I expect new models and languages explicitly designed for parallel computation to become dominant, I think it's likely that any such dip will be brief and we will come out on the other side with significantly better (-performing, more reliable) systems than we have now.
We will need new design and programming paradigms, new tools, and new debuggers.
Wasn't CHILL supposed to be all of that? Designed for doing massively parallel computations in the telecom world....
There have always been specialized languages for specialized hardware. What you seem to be describing is the Single Instruction/Multiple Data stream (SIMD) situation. The thing is, as the baseline CPUs have become more powerful, many of those specialized hardware things get subsumed.

Once upon a time I built a hardware FFT butterfly for radar data streams, plus a small bit of microcode to handle the macrocode that made it useful. That was back in the days when a 5 MHz Z-80 was blindingly fast and a 0.75 GHz data stream was beyond its capabilities. How fast is the dual core in my laptop? 2.8 GHz. Oh.

Once upon a time I used a big IBM mainframe with an array processor strapped on the side to do some wire-frame modelling; wire-frame because rendering was too computationally intensive to do at even 1/4 real time. It used a modified form of APL to do the programming. Don't ever tell me that an interpreter is slow! So you run a flight simulator on your laptop. Quake. Myst. Oh. Sometimes sheer linear power trumps specialized hardware.

Once upon a time I was told that, because of the breakdown voltage of silicon semiconductors, there was a limit to how fast they could be clocked: electron migration limited the voltage that could be applied. The theoretical limit was a megahertz per volt. That made the 5 MHz Z-8000, which looked like a cross between an IBM 370 and a PDP-11, "the most powerful chip that could ever be made". Where is it now? Oh.

Once upon a time an engineer at AMD told me that it would be impossible to make silicon wafers of greater than 5" diameter until we had zero-G factories, because between 'pulling' the crystal and 'cutting' it, flaws would be introduced into the lattice that made the wafers unusable; 5" was the practical limit. Last time I visited AMD they were using 12" wafers. Oh.

Yes, 'a new generation' will grow up that can deal with the new technology and techniques.
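(For the curious: the FFT butterfly that once needed dedicated hardware is now a few lines of interpreted code on a commodity CPU. A minimal Python sketch of the radix-2 decimation-in-time form -- my own illustration, nothing to do with that old microcode:)

```python
import cmath

def butterfly(a, b, w):
    """Radix-2 FFT butterfly: combine two half-size DFT outputs
    a and b using twiddle factor w."""
    t = w * b
    return a + t, a - t

def fft(x):
    """Recursive radix-2 FFT; input length must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])   # DFT of even-indexed samples
    odd = fft(x[1::2])    # DFT of odd-indexed samples
    out = [0] * n
    for k in range(n // 2):
        w = cmath.exp(-2j * cmath.pi * k / n)
        out[k], out[k + n // 2] = butterfly(even[k], odd[k], w)
    return out

# A DC input of [1, 1, 1, 1] transforms to [4, 0, 0, 0].
print(fft([1, 1, 1, 1]))
```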
"A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it." - physicist Max Planck I suspect the same applies with technology. -- helicopter (n): 30,000 parts in tight orbit around a hydraulic fluid leak, waiting for metal fatigue to set in. -- To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org For additional commands, e-mail: opensuse+help@opensuse.org