Hans Witvliet said the following on 07/24/2013 03:57 AM:
Let me chime in,
I'm so glad you're clarifying this and taking it in parts rather than painting with a ridiculously wide brush.
Indeed the question of whether you're dealing with mem or reg isn't that important. More relevant is the size of the variable you are working on.
Right. In programming - certainly application programming - loop counters, for example, are small (less than 2^32); see earlier. The reality is that there is a Poisson-like curve here: most such counters are small. I don't think we have any disagreement here.
In one of my lab exercises we had to write an arithmetic lib. When operating on uint128 with an 8-bit CPU, it takes quite a bit more time and code compared with a CPU with broader registers.
Ah, lab exercises. I've got no argument with this either, but how often are embedded 8-bit microcontrollers dealing with uint128 variables? I spent nearly a decade dealing with such small machines - as I say, 'embedded systems' - and a bit of higher-level UI/presentation work. Yes, there was a lot of 16-bit arithmetic, and most of that was address calculation. The bulk of the work was flipping bits on and off in individual bytes. To be honest, a 16-bit machine wouldn't have been faster for that part of the work.
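To make the cost concrete, here's a rough sketch (my own; the type and function names are invented, not from any lab exercise) of what a single 128-bit add turns into when the machine can only operate on 32 bits at a time - an 8-bit CPU would need the same loop over sixteen limbs instead of four:

```c
#include <stdint.h>

/* A 128-bit value as four 32-bit limbs, least significant first.
 * A narrow CPU is forced into this limb-by-limb loop with explicit
 * carry propagation; a 128-bit ALU would do it in one instruction. */
typedef struct { uint32_t limb[4]; } u128;

static u128 u128_add(u128 a, u128 b) {
    u128 r;
    uint64_t carry = 0;
    for (int i = 0; i < 4; i++) {
        uint64_t s = (uint64_t)a.limb[i] + b.limb[i] + carry;
        r.limb[i] = (uint32_t)s;  /* keep the low 32 bits        */
        carry = s >> 32;          /* ripple the carry upward     */
    }
    return r;
}
```

The per-limb carry handling is exactly the extra time and code the lab exercise runs into: the work grows with the ratio of variable width to register width.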
The other way round: assuming a 64-bit datapath between CPU and mem, fetching a 64-bit variable is as expensive as an 8-bit one.
We've met this many times. Many of the IBM /360 machines had a cheap option of a narrow memory path but a wide computation path. The original IBM PC used an 8088, which was a 16-bit engine with an 8-bit external data path. False economies in my opinion, but that's what happens when marketing gets the upper hand.
Between those obvious extremes, there is a large grey area.
And there you have it. You might notice a theme in what I've been presenting. I've tended to focus on the application side of things. So ...
Secondly, the number of applications (not number-crunching benchmarks) that use variables larger than 32 bits is very limited. It is more the OS that is using large numbers (mem, disk).
Oh YES! Very much YES! YES! YES! - in server-type applications which use large amounts of disk and large amounts of memory, as opposed to a typical (well, here at least) desktop that uses between 1G and 4G of memory.
_IF_ (!) programmers never used variables larger than they really needed, I would say that 32-bit would save time & mem. (I mean using 32-bit virtual machines inside a 64-bit KVM/Xen.)
Or raw 32-bit machines without virtualization ... That's not a big "IF". I'm sure if we go through the apps in the distribution, even the 64-bit distribution, we'll find most use smaller variables: loop counters, character arrays and indexes, all that kind of stuff. There's a class of applications that use either the BIGNUM arbitrary-precision library or the floating-point library - by design. I very much doubt those kinds of applications will benefit from 64 bits as such. Yes, a 64-bit machine might be able to handle floating point better, faster, but it's still floating point, which isn't what we're discussing here. You could just as well do what IBM do and have a 128-bit bus on a 64-bit machine to reduce bus cycles. Perfectly valid. You could have a 128-bit bus doing prefetch/pipeline for an 8-bit machine as well ...
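As a rough illustration of the memory side of that "IF" (my own sketch; the struct names are invented), consider the same 32-bit payload linked by a 32-bit array index versus a native pointer. On an LP64 system the pointer field, plus alignment padding, typically doubles the node:

```c
#include <stdint.h>

/* Two ways to link a node holding a 32-bit payload. On an LP64
 * system (64-bit pointers), node_ptr is typically 16 bytes where
 * node_idx is 8 - the "programmers using variables larger than
 * they really need" cost, paid once per node. */
struct node_idx { uint32_t value; uint32_t next; };          /* link by array index */
struct node_ptr { uint32_t value; struct node_ptr *next; };  /* link by pointer     */
```

This is also why the 32-bit-guests-on-a-64-bit-host idea saves memory: the guests' pointers are half the size.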
As explained above, using vars (data/pointers) larger than your internal architecture is costly. In those cases using 32-bit is counter-productive.
Let's put that back in context, since it works both ways: if you have a 64-bit architecture - something that needs 64-bit calculations to address memory, your database - then doing the arithmetic on a 32-bit machine is costly. If you are writing an application controller that is manipulating byte by byte and masking, then you should ask whether a 32-bit or a 64-bit machine gives any advantage, since you're not doing any 32-bit or 64-bit calculations. It may be that sometimes you can do a wide mask operation, but for the most part, I found, such operations were 'narrow'. Realistically, I found that many needed algorithms simply wouldn't fit into the ROM of an 8035, so I welcomed the advent of the 16-bit machines for their code space rather than any computation. I loved the Z8000 because I implemented a very fast FORTH interpreter on it; it was fast because its 16 registers were 'equivalent' (like on the PDP-11 or the /360, and unlike the Z80/8085/8086/68000), so I could keep the top few elements of the FORTH evaluation stack in the upper part of the register bank and use the lower part as pointers to other stacks. If you're familiar with FORTH you'll see what I mean.
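For what it's worth, here's a sketch of the kind of byte-wide set/clear/test masking I mean (the register and bit names are entirely made up) - every operand is 8 bits, so a wider ALU buys these operations nothing:

```c
#include <stdint.h>

/* Hypothetical control-register bits for an embedded controller.
 * All the work is on a single byte; the same code runs no faster
 * on a 16-, 32- or 64-bit machine. */
#define MOTOR_ON   (1u << 0)   /* bit 0: motor enable    */
#define VALVE_OPEN (1u << 2)   /* bit 2: valve solenoid  */
#define LED_FAULT  (1u << 7)   /* bit 7: fault indicator */

static uint8_t set_bits(uint8_t reg, uint8_t mask)   { return (uint8_t)(reg | mask);  }
static uint8_t clear_bits(uint8_t reg, uint8_t mask) { return (uint8_t)(reg & ~mask); }
static int     bit_is_set(uint8_t reg, uint8_t mask) { return (reg & mask) != 0;      }
```

That's the "flipping bits on and off in individual bytes" workload: the operand never gets wider than the peripheral's register, whatever the CPU's word size is.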
Early in the thread, PAE was mentioned; I dare say it is an anachronism.
+1
Ten years ago, machines started to appear that could hold more than 4GB mem, but few applications were built for it. If you have 4GB or more installed, just go for a 64-bit OS.
Using that memory is the best justification I can think of.
Personally, I mostly use 32-bit, partly because I also have to deal with much older hardware.
You're not alone. Most machines here are not only 32-bit but have 1G or less of memory and slower CPUs. But then they are only used for e-mail, browsing, a bit of word processing and such. No CAD or gaming (well, I'm sure people play solitaire and such on the sly), no large photo processing or AV apps. 'Modelling' is mostly mind-mapping with some Java applications. I'm privileged to have a 64-bit AMD desktop ... but only 4G of memory :-(

The thing is that few machines have large disks; most are 20G, a few 40G. All the real disk storage is on the servers and accessed via NFS. Recall what I said about servers with gobs of memory, gobs of disk and 64-bit machines ... OK, so this isn't an 'internal cloud' -- yet. But I do know someone who has converted his home server from simply being a file server to some cloud stack. Yes, he has a big family, a big house and works at home with a 3-screen setup. He can justify all that more than some small businesses - he is a small business!

I keep saying Context is Everything, and that's what you should consider. One size does not fit all.

-- 
Well, they asked me in February, and I said it was coming out in November. When they asked in March, I said it was coming out in November. In April I pointed out that November, in fact, was going to be when the next book came out. In May, when asked on many occasions about when Maskerade was coming out, I said November. In November, it will be published. The same November all the way through, too.
   -- So Terry, when is 'Maskerade' coming out, then?
   (Terry Pratchett, alt.fan.pratchett)
-- 
To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org
To contact the owner, e-mail: opensuse+owner@opensuse.org