Re: [opensuse] systemload / cpu's
On Saturday March 20 2010, Anton Aylward wrote:
Anders Johansson said the following on 03/20/2010 02:54 PM:
On Sat, 2010-03-20 at 12:43 -0400, Anton Aylward wrote:
Aye, and with that comes a whole new raft of hazards and problems, including many to do with the various aspects of security.
Security aspects of SMP?
Apart from certain race condition exploits possibly being easier to achieve, I have no idea what that would be. Could you expand a little on what you mean?
Right now it's open-ended.
Only if software development continues to be done with the tools of the pre-multi-CPU era. I think software Darwinism will make quick work of those efforts.
Security isn't *just* about keeping hackers out. There are issues of integrity and availability to consider, never mind the Parkerian Hexad.
I shall not ever mind it.
Per has already mentioned that programmers are unfamiliar with the kind of multi-threading required for effective and efficient use of multi-core programming (arbitrary core counts, not just dual core). We can expect errors, and we can't be sure what those errors will be.
Programmers are not, as a whole, unfamiliar with these issues, to the extent they are at all settled matters of computational theory. Good means to exploit multi-CPU systems are relatively few and nascent at this point and it's true that we're going to have to expect a phase during which bugs rooted in concurrency issues not faced with uniprocessor systems will occur. As usual, people who don't live in the world of software engineering see that world as populated by bumbling tinkerers who only produce working systems by accident.
While multi-threading is not new, few programmers have been forced to use it for real in their own code. Very few programmers have had experience with *true* multiprocessing until now.
Possibly true, depending on how you quantify those qualifiers "few" and "for real", but the research community has been seriously exploring the conceptual and software tools required to effectively, efficiently and reliably exploit multi-processor systems for quite some time, and the fruits of those labors are now becoming available to the workaday programmer.
With this new programming paradigm the average programmer is confronted with the need to understand much more and to write considerably cleaner code than ever before. Very likely there will be pressure to make more applications multi-threaded, lest they be too slow to be competitive.
This is not really true. Everyone working in this field is pursuing solutions that alleviate the need to deeply understand the myriad issues and "corner cases" of concurrent systems. This is much as operating systems protect application programmers from the myriad of hardware used in modern computing platforms, and much as programming languages allow programmers to express their software concepts in notations far removed both from the simplicity of today's CPU programming models and from the arcane complexity of those same models (ironically enough).
It's also *much* harder to test and debug multi-threaded code. The languages we use, most notably C and C++, don't provide for much in the way of error avoidance or detection of interlock, contention and race conditions, either. The bugs are also going to be much more dependent on machine environment, other software and so on, making reproduction of issues incredibly difficult.
C and C++ are unlikely to be the languages used when we move into the era of multi-CPU systems and explicitly parallel software designs. I don't believe anyone thinks that thread-based designs will suffice and they are not what is being developed for application programming.
With multi-core systems, we're probably at the threshold of an era with an entirely new set of program bugs. Classic statistical testing and even execution path analysis will be less effective than ever.
I expect we might see a dip in reliability but since I expect new models and languages explicitly designed for parallel computation to become dominant, I think it's likely that any such dip will be brief and we will come out on the other side with significantly better (-performing, more reliable) systems than we have now.
We will need new design and programming paradigms, tools and debugging tools.
I don't see them yet.
Look harder.

Randall Schulz

--
To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org
For additional commands, e-mail: opensuse+help@opensuse.org
On Saturday 20 March 2010 23:31:58 Randall R Schulz wrote:
On Saturday March 20 2010, Anton Aylward wrote:
Anders Johansson said the following on 03/20/2010 02:54 PM:
On Sat, 2010-03-20 at 12:43 -0400, Anton Aylward wrote:
Aye, and with that comes a whole new raft of hazards and problems, including many to do with the various aspects of security.
Security aspects of SMP?
Apart from certain race condition exploits possibly being easier to achieve, I have no idea what that would be. Could you expand a little on what you mean?
Right now it's open-ended.
Only if software development continues to be done with the tools of the pre-multi-CPU era. I think software Darwinism will make quick work of those efforts.
This whole conversation is slightly surreal. When I went to university in the mid 90s, we looked at SMP issues, and I refuse to believe that it has become less common since then. This isn't anything new. SMP has been with us for a very long time, and it is only getting more common. Even Windows programmers have been exposed to SMP for at least a decade.

I seriously don't think this is a problem we need to be overly concerned with. Yes, there are ongoing research efforts to get more performance out of SMP, but in the 12 years since I left uni, I have not seen any significant security issues that came from running on more than one CPU, so I will choose to ignore this risk for now. Yes, there are deadlock issues and similar headaches, but I seriously don't see any security problems there.

Anders
On Saturday March 20 2010, Anders Johansson wrote:
On Saturday 20 March 2010 23:31:58 Randall R Schulz wrote:
On Saturday March 20 2010, Anton Aylward wrote:
Anders Johansson said the following on 03/20/2010 02:54 PM:
... Security aspects of SMP?
Apart from certain race condition exploits possibly being easier to achieve, I have no idea what that would be. Could you expand a little on what you mean?
Right now it's open-ended.
Only if software development continues to be done with the tools of the pre-multi-CPU era. I think software Darwinism will make quick work of those efforts.
... but I seriously don't see any security problems there
Yes, I agree. Multi-CPU systems present the only way to continue the now played-out performance boosts we (software types) were getting for free for a few decades from the chip makers. So now the onus is on the developers of languages and tools that make exploiting multi-CPU systems a manageable job for programmers. But I see very little added security risk in the move to explicitly parallel solutions to performance challenges. I get a strong sense of fear of the unknown from Mr. Aylward.
Anders
Randall Schulz
Randall R Schulz said the following on 03/20/2010 07:02 PM:
I get a strong sense of fear of the unknown from Mr. Aylward.
Ah, time for the Rumsfeld quote:

"As we know, there are known knowns. There are things we know we know. We also know there are known unknowns. That is to say, we know there are some things we do not know. But there are also unknown unknowns, the ones we don't know we don't know."
-- Donald Rumsfeld, February 12, 2002, Department of Defense news briefing

It's the unknown unknowns that bother me. They should bother you too.

--
The absence of alternatives clears the mind marvelously. -- Henry Kissinger
On Sat, 2010-03-20 at 15:31 -0700, Randall R Schulz wrote:
I expect we might see a dip in reliability but since I expect new models and languages explicitly designed for parallel computation to become dominant, I think it's likely that any such dip will be brief and we will come out on the other side with significantly better (-performing, more reliable) systems than we have now.
We will need new design and programming paradigms, tools and debugging tools.
Wasn't CHILL supposed to be all of that? Designed for doing mass parallel computations in the telecom world....
Hans Witvliet said the following on 03/20/2010 06:53 PM:
On Sat, 2010-03-20 at 15:31 -0700, Randall R Schulz wrote:
I expect we might see a dip in reliability but since I expect new models and languages explicitly designed for parallel computation to become dominant, I think it's likely that any such dip will be brief and we will come out on the other side with significantly better (-performing, more reliable) systems than we have now.
We will need new design and programming paradigms, tools and debugging tools.
Wasn't CHILL supposed to be all of that? Designed for doing mass parallel computations in the telecom world....
There have always been specialized languages for specialized hardware. What you seem to be describing is the Single Instruction/Multiple Data stream situation. The thing is, as the baseline CPUs have become more powerful, many of those specialized hardware things get subsumed.

Once upon a time I built a hardware FFT butterfly for radar data streams and a small bit of microcode to handle the macrocode to make it useful. That was back in the days when a 5MHz Z-80 was blindingly fast and a 0.75 GHz data stream was beyond its capabilities. How fast is the dual core in my laptop? 2.8GHz. Oh.

Once upon a time I used a big IBM mainframe with an array processor strapped on the side to do some wire-frame modelling; wire-frame because rendering was too computationally intensive to do at 1/4 real time. It used a modified form of APL to do the programming. Don't ever tell me that an interpreter is slow! So you run a flight simulator on your laptop. Quake. Myst. Oh. Sometimes sheer linear power trumps specialized hardware.

Once upon a time I was told that, because of the breakdown voltage of silicon semiconductors, there was a limit to how fast they could be clocked: electron migration constrained the voltage that could be applied, and the theoretical limit was a megahertz per volt. That made the 5 MHz Z-8000, which looked like a cross between an IBM 370 and a PDP-11, "the most powerful chip that could ever be made". Where is it now? Oh.

Once upon a time an engineer at AMD told me that it would be impossible to make silicon wafers of greater than 5" diameter until we had zero-G factories, because between 'pulling' the crystal and 'cutting' it, flaws would be introduced in the lattice that made the wafers unusable; 5" was the practical limit. Last time I visited AMD they were using 12" wafers. Oh.

Yes, 'a new generation' will grow up that can deal with the new technology and techniques.
"A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it." - physicist Max Planck I suspect the same applies with technology. -- helicopter (n): 30,000 parts in tight orbit around a hydraulic fluid leak, waiting for metal fatigue to set in. -- To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org For additional commands, e-mail: opensuse+help@opensuse.org
participants (4)
-
Anders Johansson
-
Anton Aylward
-
Hans Witvliet
-
Randall R Schulz