On 2023-12-05 04:56, Darryl Gregorash wrote:
On 2023-12-04 20:52, Carlos E. R. wrote:
On 2023-12-05 03:37, Darryl Gregorash wrote:
On 2023-12-04 18:24, David C. Rankin wrote:
All,
To celebrate the list working again, I'll pass along something that brought a smile to my face a bit ago. I was looking at help for kcalc, and I noticed an entry for long-double precision and clicked it:
https://paste.opensuse.org/pastes/b1300d1770a4
Remember when user-choice and customization was the touchstone of Linux? Back when the help pages told you about defines to tailor the app to your needs and then a simple:
./configure
make
make install
The good ole days.... :)
Ordinarily I hesitate to ask silly questions, but when was the last time you needed double-precision calculation, much less long double? Never mind that it's not even an IEEE standard...
Yet the processor has those types, so I assume someone uses them.
Yeah, if you are working, for example, at places like CERN or LIGO.
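For a sense of scale, here is a small Python sketch (Python's float is a C double underneath) showing just how far double precision reaches before it starts dropping things:

```python
import sys

# A double carries 53 significand bits, roughly 15-17 significant
# decimal digits.
print(sys.float_info.dig)       # -> 15 (guaranteed decimal digits)
print(sys.float_info.mant_dig)  # -> 53 (significand bits)

# At 10**16 the spacing between adjacent doubles is already 2.0,
# so adding 1 is silently lost to rounding...
print((1e16 + 1.0) - 1e16)      # -> 0.0

# ...while at 10**15 the spacing is still 0.125, so the 1 survives.
print((1e15 + 1.0) - 1e15)      # -> 1.0
```

So for everyday arithmetic a double is plenty; it is at the extremes, many significant digits or huge dynamic range, that extended precision starts to matter.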
Looong ago, when you had to buy the coprocessor separately, I was told that long integers (whatever the type was called back then) were needed for financial calculations. You could not add trillions of dollars and drop a cent due to rounding.

First off, there is a substantial difference between integer data types and floating-point data types, which is what things like double precision are. Integers can be represented exactly in computer memory, provided one has a large enough number of bytes to represent them. Think bytes, words (2 bytes), double words (4 bytes), and so on.

Floating-point data types are used to store numbers which are not integers. A floating-point data type contains 3 fields: the sign bit, the exponent field (variable in size, depending on the precision), and the significand field (again, length dependent on the precision). All of these fields are unsigned integers. For example, a single-precision number is 32 bits in length, consisting of the sign bit, 8 exponent bits, and 23 bits for the significand, while a double-precision number is 64 bits total, containing 11 exponent bits and 52 bits for the significand.

This is really getting too long to continue, so I really recommend you go to Wikipedia, starting with https://en.wikipedia.org/wiki/Floating-point_arithmetic
then continuing to https://en.wikipedia.org/wiki/Double-precision_floating-point_format

When you've digested that, you may (well, probably will) have a question about how the number 0 is stored in floating-point format. This is simple: if the exponent and significand fields are both zero, the stored number is 0. The sign bit can be either value, giving +0 or -0, which compare as equal.

On 2023-12-05 05:01, Carlos E. R. wrote:
I remember the curiosity that the types available to the coprocessor did not match the IEEE types back then. Maybe the Intel people thought differently.
Certainly back in the Dark Ages (i.e., 30+ years ago), I believe Intel did not follow the IEEE format. Today, however, I believe they do use the IEEE format.
I have not read up on the matter again, but I am curious what the status is today. Maybe a Wikipedia article?