Can someone comment on the sensitivity of floating point calculations to different OSes and architectures (Mac, Windows, Linux, …)? How likely is it that a certain numerical calculation works (in terms of precision) on my laptop (a MacBook), but fails for numerical reasons when someone runs the same code on Windows or Linux, on a different CPU, with a math coprocessor, on an older/newer OS release, with an older/newer JDK, etc.?

The reason I ask is that I'm planning a final project for the students in my Introduction to Scala class. The project will be to implement certain matrix computations functionally, including transcendental functions of a matrix such as sin, cos, and exp, computed via their Taylor series expansions (using foldLeft). The students will have to experiment with how many terms of the Taylor series are needed to achieve the precision required to pass the test suite. Can it be the case that the precision is OK on a student's laptop, but fails when I run their code on mine?
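For concreteness, here is a minimal sketch of the scalar analogue of that exercise (the matrix version would replace `Double` with a matrix type and `1.0` with the identity matrix); the function name `expTaylor` and the term count are just illustrative, not part of the actual assignment:

```scala
// Taylor series for exp(x) = sum over k of x^k / k!, accumulated
// functionally with foldLeft. The accumulator carries (running sum,
// current term) so each term is built from the previous one.
def expTaylor(x: Double, nTerms: Int): Double =
  (1 to nTerms).foldLeft((1.0, 1.0)) { case ((sum, term), k) =>
    val next = term * x / k   // x^k / k! built incrementally
    (sum + next, next)
  }._1

// More terms -> closer to math.exp, down to double-precision limits.
val approx = expTaylor(1.0, 20)
assert(math.abs(approx - math.exp(1.0)) < 1e-12)
```

The question is precisely whether an assertion like the last line, tuned on one machine, can fail on another.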

Floating point calculations conform to the binary32 (Float) and binary64 (Double) formats of the IEEE 754 standard on every common architecture, so the same calculation will produce the same result on your laptop and on a student's laptop.
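For the basic operations this determinism can even be checked at the bit level: IEEE 754 requires +, -, *, / and sqrt to be correctly rounded, so a conforming JVM produces the exact same bit pattern everywhere (the classic 0.1 + 0.2 case is used here only as an example):

```scala
// 0.1 + 0.2 is not the real number 0.3, but it is the *same* double
// on every IEEE 754-conforming platform: the literal below is the
// nearest double to the true sum.
val bits = java.lang.Double.doubleToLongBits(0.1 + 0.2)
assert(bits == java.lang.Double.doubleToLongBits(0.30000000000000004))
assert(0.1 + 0.2 != 0.3)  // reproducible, yet not the "exact" decimal answer
```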

IEEE 754 specifies the results of some operations (like div), but not others (like sin).

Intel shipped serious bugs in its first div, sin, cos, etc. implementations.

The Intel Pentium's fdiv bug violated the IEEE 754 spec, so it was fixed.

The fsin implementation, however, wasn't mandated by IEEE 754, so Intel turned the bug into a feature and maintains bug-compatibility with its first FPUs.

AMD maintains compatibility with Intel, but not 100%: I fed pseudo-random bits (converted to 80-bit long double using a C union) to /, sqrtl, and sinl (implemented as x87 instructions), and hashed the results (converted back to bits), 1e6 times. The only difference I located was with sinl, which gave different hashes between Intel and AMD (but consistent between PIII and Core i7, as well as Athlon XP and Athlon 5050e). – fgrieu Oct 28 '12 at 20:36

Software compensates for hardware bugs to meet quality specifications, so maybe software also compensates for differences between CPU vendors?
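On the JVM it does, for exactly the functions IEEE 754 leaves unspecified: java.lang.StrictMath uses the fdlibm algorithms and is specified to return bit-identical results on every platform, while java.lang.Math (which Scala's math package delegates to) is allowed to use faster platform code as long as it stays within about 1 ulp of the exact result. A quick comparison (the sample input is arbitrary):

```scala
// StrictMath.sin: reproducible bit-for-bit across platforms (fdlibm).
// math.sin (java.lang.Math.sin): may differ slightly, but both are
// required to be within ~1 ulp of the exactly rounded result, so
// they can differ from each other by at most a couple of ulps.
val x = 1e10
val strict = StrictMath.sin(x)
val fast   = math.sin(x)
assert(math.abs(strict - fast) <= 2 * math.ulp(1.0))
```

So if you need cross-platform bit-reproducibility for transcendental functions on the JVM, StrictMath is the tool; with plain math.sin you should only rely on ulp-level accuracy, not identical bits.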

Testing floating point values for equality rarely makes sense, regardless of how reproducible the standardized operations are, because floating point values are inherently approximations: formulas that are mathematically equivalent may not give the same result.

For example, you cannot assume that things like:

a+b+c == c+b+a

a*(b+c) == a*b + a*c

(also, keep in mind Double.NaN == Double.NaN is false)
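A concrete instance of the reordering problem: mix a large and a small magnitude so that the small term is absorbed in one evaluation order but not the other.

```scala
val a = 1e16
val b = -1e16
val c = 1.0
// Left-to-right: (1e16 + -1e16) + 1.0  ==  0.0 + 1.0  ==  1.0
// Reordered:     (1.0 + -1e16) + 1e16  ==  -1e16 + 1e16  ==  0.0
// (1.0 - 1e16 rounds back to -1e16: the 1.0 is below the rounding
// granularity at that magnitude, so it vanishes.)
assert((a + b) + c == 1.0)
assert((c + b) + a == 0.0)
```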

The standard solution is to specify the relative error you are willing to tolerate. Double precision carries roughly 16 significant decimal digits (machine epsilon is about 2.2e-16), but rounding errors propagate and accumulate. Different calculations vary in stability, but a rule of thumb is that the number of digits lost is about the base-ten logarithm of the number of operations.
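One sketch of such a tolerance check (the name `approxEqual` and the default `relEps` are placeholders; pick the tolerance to suit the problem):

```scala
// Compare two Doubles by relative error, falling back to exact
// equality when both are zero (relative error is undefined there).
def approxEqual(x: Double, y: Double, relEps: Double = 1e-12): Boolean = {
  val scale = math.max(math.abs(x), math.abs(y))
  scale == 0.0 || math.abs(x - y) <= relEps * scale
}

assert(approxEqual(0.1 + 0.2, 0.3))   // == would be false here
assert(!approxEqual(1.0, 1.0001))
```

A test suite built on a check like this, with a tolerance a few orders of magnitude looser than the expected truncation error, is much less sensitive to platform differences than one using ==.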

Yes, that’s exactly my point. I was asking whether the same calculation could be < epsilon on one architecture but >= epsilon on another architecture/JDK/Scala version/OS/etc.

@curoli, yes, that’s the best that I can think of as well. The alternative is that I use the same epsilon everywhere, and any student’s code that fails is something I just have to look at manually.