At the core of every CPU is an accuracy issue. Intel famously got this wrong years ago, shipping a Pentium whose floating-point division algorithm returned wrong answers (the FDIV bug). IEEE 754 sets the standard for how a processor represents and rounds numbers, which keeps the inaccuracies predictable. The real issue is that we do not measure the accuracy of the device as a whole.

Logically, you can understand the precision of a single operation. A 64-bit double resolves relative differences down to about 1/2^52, but a tool is no more accurate than its least accurate part.
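
That spacing can be checked directly from Python, which exposes the IEEE 754 parameters of the underlying hardware (a minimal sketch):

```python
import sys

# IEEE 754 double precision carries a 53-bit significand, so the relative
# spacing between adjacent values near 1.0 is 2**-52.
print(sys.float_info.epsilon)        # 2.220446049250313e-16, i.e. 2**-52
print(sys.float_info.epsilon == 2**-52)

# Anything smaller than half that spacing is simply absorbed:
print(1.0 + 2**-53 == 1.0)           # True: the addition changes nothing
print(1.0 + 2**-52 == 1.0)           # False: this one is representable
```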

When you get into loops of operations, the accuracy decreases. Run the same loop on a different machine, or even in a different order, and you can get different numbers. IEEE 754 attempts to make the numbers consistent, but there is a limit to the accuracy of a computer.
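
A small Python sketch shows both effects: floating-point addition is not associative, so evaluation order changes the answer, and a long loop of "harmless" additions quietly accumulates rounding error:

```python
import math

# Reordering the same three additions gives two different answers.
a, b, c = 1e16, -1e16, 1.0
print((a + b) + c)   # 1.0
print(a + (b + c))   # 0.0 -- the 1.0 was absorbed into -1e16 first

# A loop of a million additions of 0.1 drifts off the exact answer.
total = 0.0
for _ in range(1_000_000):
    total += 0.1
print(total)                          # close to, but not, 100000.0
print(math.fsum([0.1] * 1_000_000))   # correctly rounded sum, for comparison
```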

The bigger problem is that we have not even started on the problem of measuring the accuracy of machines. We push the problem further down the road with 64-bit, 128-bit, and 256-bit numbers, but unless we understand what the real accuracy of the machine is, we will never know when we exceed its limits.

If I use my calculator to figure out 1/(2^10), I get .000976563. The real answer is .0009765625.

Add 15 to each, and both numbers create the same hex representation in 32-bit code, 41700400.

The difference shows up as two distinct numbers in 64-bit code:

402E008000000000 (15.0009765625)
402E008000044B83 (15.000976563)
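
The round-trip can be reproduced with Python's struct module, which exposes the raw IEEE 754 bits (hex shown lowercase):

```python
import struct

exact = 15.0009765625    # 15 + 1/1024: exactly representable in binary
display = 15.000976563   # the calculator's rounded 9-digit display

def f32_hex(x):
    return struct.pack('>f', x).hex()

def f64_hex(x):
    return struct.pack('>d', x).hex()

# In 32-bit single precision the two collapse to the same bit pattern...
print(f32_hex(exact), f32_hex(display))   # 41700400 41700400

# ...but 64-bit double precision can tell them apart.
print(f64_hex(exact), f64_hex(display))   # 402e008000000000 402e008000044b83
```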

The calculator app on my computer and my TI hand calculator both work at roughly 32-bit accuracy: they both give me the same wrong number.

Transferring data from a 32-bit machine to a 64-bit machine is already a disaster from a precision and accuracy point of view, but the problem is more complex than that.
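
One way to see the transfer problem from Python: narrow a value to 32 bits and widen it back. The widening itself is exact, but it faithfully preserves the 32-bit rounding error (a sketch using struct):

```python
import struct

def through_f32(x):
    """Round x to 32-bit single precision, then widen back to a double."""
    return struct.unpack('>f', struct.pack('>f', x))[0]

print(through_f32(15.0009765625))   # 15.0009765625: fits in 24 significand bits
print(through_f32(0.1))             # 0.10000000149011612: the 32-bit rounding
                                    # error now lives inside a 64-bit number
```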

http://crd-legacy.lbl.gov/~dhbailey/dhbpapers/hpmpd.pdf
I only understand the problem at a basic level, but I do understand that you cannot use a computer as a complex modeling tool unless you understand the precision and accuracy of the tool first. If you browse through the link above, you will see they are finding accuracy problems in the modeling. That is a scary direction, because only a few people realize and understand the accuracy and precision issues with computers.

To be honest, I fully expected to find an accuracy and precision benchmark that rated my machine against a set of industry standards. Lower accuracy and precision numbers would be fine for a game machine; scientific work would demand the higher numbers.
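
Something along those lines does exist in embryo: Kahan's classic "paranoia" program probes a machine's arithmetic empirically. A toy version of its first test, measuring the machine epsilon by experiment rather than trusting the spec sheet:

```python
def measured_epsilon():
    """Empirically find the smallest power of two eps with 1.0 + eps > 1.0."""
    eps = 1.0
    while 1.0 + eps / 2.0 > 1.0:
        eps /= 2.0
    return eps

print(measured_epsilon())   # 2.220446049250313e-16 on IEEE 754 doubles
```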

If we do not understand the accuracy and precision of my simple nMP, how do we understand the accuracy and precision of the new supercomputers, which are tens of thousands of nMPs tied together?

Logically, you can work around the issues, but unless you have a quality way to measure the accuracy and precision of your work, you are just guessing.
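
Short of a standard benchmark, one honest measurement you can make today is to rerun a computation in higher-precision reference arithmetic and compare. A sketch using Python's decimal module and the logistic map (the map is chosen because it amplifies error quickly; the 50-digit precision and 40 steps are arbitrary choices for illustration):

```python
from decimal import Decimal, getcontext

getcontext().prec = 50   # 50-digit reference arithmetic

def logistic(x, steps):
    # x -> 4x(1-x), a chaotic map that roughly doubles any error each step
    for _ in range(steps):
        x = 4 * x * (1 - x)
    return x

approx = logistic(0.2, 40)                # hardware doubles
reference = logistic(Decimal("0.2"), 40)  # 50-digit reference run
print(abs(Decimal(approx) - reference))   # measured drift of the double run
```

After 40 steps the double-precision run has visibly drifted away from the reference, and the size of that drift is exactly the kind of number a per-machine accuracy report could contain.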

If you take it to the next level, say you have a 3D scanning and printing system to reproduce transmission gears. Unless you have a standard way to certify the accuracy of the machines, there is no way to port the data between machines with any guarantee of the accuracy and precision needed. There may not even be a way to scan and print the part accurately, because we really have no way of measuring the accuracy and precision of a simple machine.

It is interesting that computer science has designed processors with known computational flaws and still has no method to measure the accuracy and the precision of the tool. We are depending on logical accuracy and ignoring real accuracy.