On Sat, 2014-06-28 at 09:07 +0000, John Colvin via Digitalmars-d wrote:
Post by John Colvin via Digitalmars-d:
> I still maintain that the need for the precision of 80-bit reals
> is a niche demand. It's a very important niche, but it doesn't
> justify having its relatively extreme requirements be the
> default. Someone writing a matrix inversion has only themselves
> to blame if they don't know plenty of numerical analysis and look
> very carefully at the specifications of all operations they are
I fear the whole argument is getting misguided. We should reset.
If you are doing numerical calculations then accuracy is critical.
Arbitrary precision floats are the only real (!) way of doing any
numeric non-integer calculation, and arbitrary precision integers are
the only way of doing integer calculations.
However speed is also an issue, so to obtain speed we have hardware
integer and floating point ALUs.
The cost of the hardware integer ALU is bounded integers. Python
appreciates this and uses hardware integers when it can and software
integers otherwise; thus Python is very good for doing integer work. C,
C++, Go, D, Fortran, etc. are fundamentally crap for integer calculation
because their integers are bounded. Of course, if calculations are
provably within the hardware integer bounds this is not a constraint and
we are happy with hardware integers. Just don't try calculating
factorials, Fibonacci numbers and other numbers used in some
bioinformatics and quant models. There is a reason why SciPy has a
massive following in bioinformatics and quant computing.
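The Python point is easy to demonstrate with nothing but the standard
library: the moment a result outgrows the hardware word, Python silently
promotes to arbitrary-precision integers, so factorials just work.

```python
import math

# 20! is the last factorial that fits in a signed 64-bit integer;
# 25! is far beyond it. Python switches to software (arbitrary-
# precision) integers transparently, so both are exact.
print(math.factorial(20))               # 2432902008176640000
print(math.factorial(25))               # 15511210043330985984000000
print(math.factorial(25) > 2**63 - 1)   # True: overflows 64-bit hardware
```

In C or D the same expression would silently wrap or require reaching
for a bignum library by hand.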
The cost of the floating-point ALU is accuracy. Hardware floating-point
numbers are dreadful in that sense, but again the issue is speed, and
for GPUs they went 32-bit for speed. Now they are going 64-bit, as they
can get just about the same speed and the accuracy is so much greater.
For hardware floating point, the more bits you have the better. Hence
IBM, in the 360 and later, having 128-bit floating point for accuracy at
the expense of some speed. Sun had 128-bit in the SPARC processors for
accuracy at the expense of a little speed.
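The accuracy cost is visible even in the most trivial arithmetic. A
minimal standard-library sketch, comparing 64-bit hardware floats with
exact rational (software) arithmetic:

```python
from fractions import Fraction

# Summing 0.1 ten times in 64-bit hardware floating point does not
# give exactly 1.0, because 0.1 has no finite binary representation.
hw = sum(0.1 for _ in range(10))
exact = sum(Fraction(1, 10) for _ in range(10))

print(hw == 1.0)     # False: accumulated rounding error
print(hw)            # 0.9999999999999999
print(exact == 1)    # True: exact rational arithmetic
```

More bits push the error further out, which is exactly why 128-bit
hardware formats are attractive, but no fixed width eliminates it.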
As Walter has told us (or will), C (and thus C++) got things woefully
wrong in its support for numerical work, because its inventors were
focused on writing operating systems and supporting only PDP hardware.
They, and the folks who then wrote various algorithms, didn't really get
numerical analysis. If C had targeted the IBM 360 from the outset,
things might have been very different. We have to be clear on this:
Fortran is the only language that supports hardware floating-point types
at all well.
Intel's 80-bit floating point was an aberration; they should just have
done 128-bit in the first place. OK, so they got the 80-bit stuff as a
sort of free side-effect of creating 64-bit, but they ran with it. They
shouldn't have. I cannot see it ever happening again. cf. ARM.
By being focused on Intel chips, D has failed to get floating point
correct in a way very analogous to C failing to get floating-point types
right by focusing on the PDP. Yes, using 80-bit on Intel is good, but
no-one else has it. Floating-point sizes should be 32-, 64-, 128-,
256-bit, etc., and D needs to be able to handle this. So do C, C++,
Java, etc. Go will be able to handle it when it is ported to appropriate
hardware, as they use float32, float64, etc. as their type names. None
of this float, double, long double, double double rubbish.
So should D perhaps make a breaking change, adopt types int32, int64,
float32, float64, float80, and get away from the vagaries of bizarre
type relationships with the hardware?
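Why the explicit width in the name matters can be sketched in a few
lines of standard-library Python, using struct's 'f' (32-bit) and 'd'
(64-bit) IEEE formats as stand-ins for float32/float64:

```python
import struct

# Round-trip 0.1 through a 32-bit and a 64-bit IEEE float.
# The 32-bit format cannot represent the nearest-to-0.1 double,
# so precision is visibly lost; the width is the contract.
x = 0.1
as32 = struct.unpack('f', struct.pack('f', x))[0]
as64 = struct.unpack('d', struct.pack('d', x))[0]

print(as32)        # 0.10000000149011612
print(as64)        # 0.1
print(as32 == x)   # False: 32 bits lost precision
print(as64 == x)   # True: 64 bits round-trips exactly
```

With a name like "double" or "long double" you have to consult the
platform ABI to know which of these behaviours you get; with float32
and float64 the name tells you.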
Dr Russel Winder t: +44 20 7585 2200 voip: sip:russel.winder at ekiga.net
41 Buckmaster Road m: +44 7770 465 077 xmpp: russel at winder.org.uk
London SW11 1EN, UK w: www.russel.org.uk skype: russel_winder