Anton Aylward wrote:
Dirk Gently said the following on 05/14/2012 04:44 AM:
the profs in the computing department were heavily involved with the hardware, doing "Control Systems" and other practical work rather than abstract things, and the profs in "Electronics" were doing things that needed computers to analyse the data and control stuff in the lab.
That's analog circuits stuff, and has nothing to do with the teaching of programming techniques as a discipline.
First, it's not always or entirely analogue. Yes, actuators/transducers may be linear -- the speakers in the earbuds when you listen to your iPod/MP3 are analogue; the microphone when you talk using Skype is analogue.
But many control systems are logic sequencers or finite state machines.
True
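Something like this toy sequencer, say -- the states and the clocked advance are entirely made up, just to show the shape of the pattern in C:

/* A toy logic sequencer / finite state machine -- hypothetical states,
   purely to illustrate the pattern, not any real controller. */
#include <stdio.h>

typedef enum { RED, GREEN, YELLOW } state_t;

/* Next state is a pure function of the current state: on each clock
   tick the sequencer advances RED -> GREEN -> YELLOW -> RED. */
static state_t tick(state_t s)
{
    switch (s) {
    case RED:    return GREEN;
    case GREEN:  return YELLOW;
    case YELLOW: return RED;
    }
    return RED; /* not reached; keeps the compiler quiet */
}

int main(void)
{
    static const char *name[] = { "RED", "GREEN", "YELLOW" };
    state_t s = RED;
    int i;

    for (i = 0; i < 6; i++) {
        s = tick(s);
        printf("tick %d -> %s\n", i + 1, name[s]);
    }
    return 0;
}

In hardware the same thing is a few flip-flops and some combinational logic, which is why it moves so easily between the two domains.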
As I mentioned, one bit-slice project I worked on was doing FFT butterfly transforms -- DIGITALLY. It was one of those "they said it couldn't be done" projects: not just doing it digitally, but doing transforms on a GHz data stream using <1MHz logic.

I can't think of any way to do FFTs without doing it digitally :-) And I can't think of any truly analogue way of doing an FT outside of the human ear for sound, and the prism for light.

Excellent. I'd love to see what your technique was someday. There's a saying in the electrical engineering industries: if you have a problem that "can't be solved", throw it at some guys who just got their bachelor's degrees -- and tell them that it's a simple enough problem that they shouldn't need micromanagement by the senior engineers. Fresh-out BSEEs haven't been taught "you can't do that" yet.
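For anyone following along at home, the butterfly itself is a small operation; here's the textbook radix-2 decimation-in-time form in C (the standard software rendering, emphatically not the bit-slice trick being described):

#include <stdio.h>
#include <complex.h>

/* One butterfly: given points a, b and twiddle factor w, replace them
   with (a + w*b) and (a - w*b). An N-point radix-2 FFT consists of
   (N/2)*log2(N) of these, wired together in the familiar pattern. */
static void butterfly(double complex *a, double complex *b, double complex w)
{
    double complex t = w * (*b);
    *b = *a - t;
    *a += t;
}

int main(void)
{
    double complex a = 1.0, b = I;  /* two sample points */
    double complex w = -I;          /* twiddle factor e^{-i*pi/2} */

    butterfly(&a, &b, w);
    printf("a = %g%+gi\n", creal(a), cimag(a));
    printf("b = %g%+gi\n", creal(b), cimag(b));
    return 0;
}

The whole trick of any FFT implementation, bit-slice or otherwise, is in how you schedule those (N/2)*log2(N) butterflies.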
Programming techniques" are embodiments of algorithms. We've seen shifts back and forth between hardware and software. The Motorola 68K was the first micro to be microprogrammed; before that they were all 'random logic' that did the same thing. Mainframes went though a similar evolution.
Yeah. IBM invented microcode so that they could make standardized CPU hardware, and then just change the microcode to have a "Business" CPU or a "Scientific" CPU. Then they got really fancy and realized that in a timesharing system, they could bank-switch the microcode memory, so that the accountant types would see a "Business" CPU and the guys in R&D would see a "Scientific" CPU, even though both were time sharing not just the same model of equipment, but in fact, the SAME single computer installed in the building.
A lot of systems today use Programmable Gate Arrays, burning logic and algorithms into hardware. We see a similar shift in crypto: doing the transforms in hardware rather than software.
It's faster. Of course, voice encryption was always done in hardware, even when it was done by analogue means -- even with changeable keys.
The Turing et al. models are "general purpose hardware" which is specifically targeted with software. That's because it's easier to change the software (e.g. microcode). From an epistemological POV, it could equally well be a well-proven chunk of "general purpose code" for which the hardware does the targeting. There is an equivalence from the POV of the mathematics :-)
Teaching "coding" (in particular in a specific language) and teaching *PROGRAMMING* are two entirely different things. I suspect many CS departments/profs do the former without doing the latter.
I think so.
There's a big debate going on now due to the whole Climate brouhaha. The CRU leaks showed how truly awful the code being used at the center of all the climate studies was... and so some are calling for a requirement that papers submitted to scientific journals which rely on results from computer programs publish the code used (either in print in the journal, or on-line).
The fiercest resistance is coming from the climate alarmism camp. Jones doesn't want anyone seeing his data "because they just want to criticize it and look for mistakes!" -- evidently not understanding that your results aren't really validated UNLESS those who truly oppose your point of view look at them and can't find any mistakes.
And he sure as hell didn't want anyone looking at his kludge-filled kode.
Not code... kode. Because it's that bad.
Other scientists are saying "I'm not a professional, and so I don't want people laughing at my amateurish code" -- but to maintain any sort of credibility, people who make scientific claims based on the results of computer code MUST put that code up for inspection by people who, for example, actually understand that computer floating-point numbers are *NOT* equivalent to mathematical numbers. Anything less is the equivalent of writing up a chemistry lab report and, in place of your calculations, just writing "... and then a miracle occurs ..."
The mathematical constant e can be calculated by the equation
e = 1 + \sum_{n=1}^{\infty} \frac{1}{n!}
Now, if you try to calculate e by
/* Note, not debugged.. not even for syntax errors */
#include <stdio.h>
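#include <math.h>

/* [The original post was cut off at the #include above. What follows
   is a hedged guess at where it was headed: sum the series for e in
   single precision and compare against e to double precision. The
   variable names and the printout are the editor's, not the original
   author's code.] */

int main(void)
{
    float e_approx = 1.0f;  /* running sum, starts at the leading 1 */
    float term = 1.0f;      /* holds 1/n!, built up one factor at a time */
    int n;

    for (n = 1; n <= 20; n++) {
        term /= (float)n;   /* term is now 1/n! */
        e_approx += term;
    }

    /* M_E is e to double precision (POSIX <math.h>, not strict ISO C). */
    printf("float  sum of series: %.10f\n", e_approx);
    printf("double M_E          : %.10f\n", M_E);
    printf("equal?              : %s\n",
           ((double)e_approx == M_E) ? "yes" : "no");

    /* A float carries only about 7 significant decimal digits, so the
       two values disagree from the 8th digit on -- the computed "e"
       is NOT the mathematical e. */
    return 0;
}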