On 2023-12-05 05:01, Carlos E. R. wrote:
Looong ago, when you had to buy the coprocessor separately, I was told that long integers (whatever the type was called back then) were needed for financial calculations. You could not add trillions of dollars and drop a cent due to rounding.
Oops, I didn't directly address this point in my earlier post. The names used today for the integer data types are somewhat confusing. Originally there were bytes, words, and double words (8, 16, and 32 bits, respectively). Then came the quadword, which is 64 bits.

In the earliest days, the CPUs had only 16-bit general purpose registers. The 386 was the first CPU to have 32-bit GPRs, and that remained the situation up until the development of Intel's 64-bit IA-64 (Itanium) processors. Since AMD wasn't invited to the table for that, they developed their own 64-bit extension, announced in 1999, which is today the industry-standard x86-64. Intel found themselves forced to follow suit, so today I don't think you'll find any new processors adhering to the IA-64 standard.

OK, enough of the history lesson. Before the appearance of 64-bit processors, you definitely needed to use a math coprocessor, or else do some fancy dancing with integer arithmetic, to handle signed numbers exceeding approximately 2.14 billion (2³¹ − 1). With 64-bit GPRs, the largest signed integer that can be handled directly with integer arithmetic exceeds 9.22×10¹⁸ (2⁶³ − 1), so unless you are doing something fancy like squaring a number in excess of about 3.04 billion, the CPU can handle it with plain integer math.