
burnsranch

macrumors member
Original poster
Jun 19, 2013
25 years ago, we wondered whether the computer and internet boom would have a positive or negative effect on productivity. Would games and other social impacts decrease productivity, or would the technology increase it?

I bought a nMP to upgrade an iMac, and after 35 years of plugging cards into buses, I am glad I do not have that option with the nMP.

I got to thinking about computational accuracy the other day. I had somewhat assumed this was part of the difference between a professional workstation and a home computer. I was surprised when I searched the web that the only work I could find on it was close to 20 years old.

It seems clear that a closed machine like the nMP could be designed to be far more accurate than a bus-based machine where cards are interchanged. I was curious how the industry measures the accuracy of professional-grade workstations.

I did not find much in my web searches. How does the professional industry certify computational accuracy in workstations, or do we just take it for granted?
 
It seems clear that a closed machine like the nMP could be designed to be far more accurate than a bus-based machine where cards are interchanged.

Huh?

You do realize that TB is letting you plug into a "bus" just like a card does, only with less of the bus exposed to you.

I don't think your logic works at all here.

You might want to look into double precision versus single precision.
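
For a concrete picture of the difference, here is a minimal Python sketch (nothing nMP-specific, just the standard struct module) showing how many digits of 1/3 survive at each width:

```python
import struct

x = 1.0 / 3.0   # Python floats are 64-bit doubles
# Round-trip the value through a 32-bit single-precision encoding.
as_single = struct.unpack('>f', struct.pack('>f', x))[0]

print(f"{x:.20f}")           # 0.33333333333333331483 -- double keeps ~16 good digits
print(f"{as_single:.20f}")   # 0.33333334326744079590 -- single keeps ~7 good digits
```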
 
I have understood single and double precision since around 1980.

There is a computational limit to both methods, depending on the register size, and it gets compounded with each loop. It was never my ball of wax, but at some point you have to understand the precision limits of the tool you are using.

There are a lot of variables in measuring precision and accuracy, and the workstation vendors sell "precision" workstations. I was just curious how they are measuring it.
 
I think the question of computational precision was mostly solved when IEEE 754 was released. I guess any mainstream CPU is capable of supporting IEEE 754 and the required precision levels.

Furthermore, we have bignum nowadays, which moves the question of precision from the hardware implementation to the software side of things. Decent programming languages have built-in support for bignum. At least Common Lisp has, and that's all I care about. I guess many of the hip and cool languages (yes, I'm looking at you, JS, Objective-C, and Swift) need additional libraries to use bignum.
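
Python, for instance, ships bignum integers out of the box; a quick sketch of what that buys you (pure standard library, nothing workstation-specific):

```python
# Integer bignums: the width grows as needed, so nothing overflows or rounds.
big = 2 ** 200                        # a 61-digit integer, held exactly
print(big + 1 - big)                  # 1 -- no precision lost

# Hardware doubles run out of mantissa bits long before that:
print(2.0 ** 200 + 1 - 2.0 ** 200)    # 0.0 -- the +1 is lost to rounding
```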

So for me, when a workstation manufacturer talks about precision, I'd rather think of the precision of the manufacturing process, because the question of computing precision seems to be solved. Even GPUs can reach acceptable levels of precision, although it is easier to run into a precision limit on a GPU than on a CPU.
 
It looks like we have ignored this little issue. I was surprised we do not have precision and accuracy benchmarks in place yet. It is an interesting problem when you start using GPUs in concert with CPUs.

It might become a bigger deal with 3d printing.
 
I do recall that Intel released a processor chip perhaps 20 years or so ago (the Pentium FDIV bug) that, under some very limited pathological circumstances, would give an incorrect mathematical result. They had some very red faces and quickly fixed it. Other than that, I'm not aware of any issues of questionable results from any mainstream computer.
 
It might become a bigger deal with 3d printing.

what do you mean by accuracy? are you talking about floating point math accuracy or something else?

ie- say the computer is limited to 3 decimals..

1/3" = .333
3 * .333 = .999

but it should be 1.000"

is that the type of problem you're talking about?
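
(fwiw, that truncation effect is easy to mimic in python with the standard decimal module by capping the working precision at 3 significant digits.. the 3-digit setting is just to mirror the example above)

```python
from decimal import Decimal, getcontext

getcontext().prec = 3            # pretend the machine only keeps 3 significant digits
third = Decimal(1) / Decimal(3)  # 0.333
print(third, third * 3)          # 0.333 0.999 -- the missing .001 never comes back
```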

---
not sure what you mean by the 3d printing bit.. or- computerized bots are already fabricating at incredible precision.. is that limit currently something which is due to computer inaccuracy or is it something else? why will 3d printing need more accuracy than current cnc etc?

(fwiw, i do assume that future tech will require tighter tolerances than we're now capable of.. nanotech and whatnot.. just that i think there are tougher areas to tackle besides computer accuracy in order to reach new levels)
 
At the core of every CPU is an accuracy issue. Intel screwed up about 20 years ago and shipped errors in its division hardware. IEEE 754 sets the standard for how to handle the inherent imprecision of the processor. The real issue is that we do not measure the accuracy of the device as a whole.

Logically, you can understand the precision of a single operation. 64-bit doubles resolve down to about 1/2^52, but a tool is no more accurate than its least accurate part.

When you get into loops of algorithms, the accuracy decreases. If you run the loop on a different machine, you can get different numbers. IEEE 754 attempts to make the numbers consistent, but there is a limit to the accuracy of a computer.

The bigger problem is that we have not even started on measuring the accuracy of machines as a whole. We push the problem further down the road with 64-, 128-, and 256-bit numbers, but unless we understand what the real accuracy of the machine is, we will never know when we exceed its limits.

If I use my calculator to figure out 1/(2^10), I get .000976563. The real answer is .0009765625.

Added to 15, both numbers produce the same hex representation in 32-bit floating point: 41700400 (which decodes to 15.0009765625).

The difference produces two distinct numbers in 64-bit floating point:
402E008000000000 and 402E008000044B83

The calculator app on my computer and my TI hand calculator both have about 32-bit accuracy.
They both give me the same wrong numbers.
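
You can reproduce those bit patterns with a few lines of Python using the standard struct module (just re-encoding the two decimals at 32 and 64 bits):

```python
import struct

exact = 0.0009765625      # 1/2^10, exactly representable in binary
rounded = 0.000976563     # the truncated calculator readout

# Single precision: both decimals collapse onto the same 32-bit pattern.
for x in (15 + exact, 15 + rounded):
    print(struct.pack('>f', x).hex())    # 41700400 both times

# Double precision: the two decimals become distinct 64-bit patterns.
for x in (15 + exact, 15 + rounded):
    print(struct.pack('>d', x).hex())    # 402e008000000000 vs 402e008000044b83
```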

Transferring data from a 32-bit representation to a 64-bit one is already a mess from a precision and accuracy point of view, but the problem is more complex:

http://crd-legacy.lbl.gov/~dhbailey/dhbpapers/hpmpd.pdf

I only understand the problem at a basic level, but I do understand that you cannot use a computer as a complex modeling tool unless you understand the precision and accuracy of the tool first. If you browse through the link above, you will find that they are finding accuracy problems in the modeling itself. That is a scary direction, because only a few of the people doing this work realize and understand the accuracy and precision issues with computers.

To be honest, I fully expected to find an accuracy and precision benchmark that rated my machine against a set of industry standards. Lower accuracy and precision numbers would be fine for a game machine; higher numbers would be needed for scientific work.

If we do not understand the accuracy and precision of my simple nMP, how do we understand the accuracy and precision of the new supercomputers, which are effectively tens of thousands of nMPs tied together?

Logically, you can work around the issues, but unless you have a quality way to measure the accuracy and precision of your work, you are just guessing.

If you take it to the next level, say you have a 3D scanning and printing system to reproduce transmission gears. Unless you have a standard way to certify the accuracy of the machines, there is no way to port the data between machines with any guarantee of the accuracy and precision needed. There may not even be a way to scan and print the part accurately, because we really have no way of measuring the accuracy and precision of a simple machine.

It is interesting that computer science has designed processors with known computational flaws and yet has no method to measure the accuracy and precision of the tool as a whole. They are depending on logical accuracy and ignoring real accuracy.
 

Ahh, I took a BRIEF look at the document you linked to, and it refers to exponential divergence in chaotic systems and ill-conditioning in large matrix operations. Those are definitely problems associated with computing workloads, but in the first case, exponential divergence, most of the phenomena shifts are the result of bifurcations in the parameters, and the actual differences between simulated runs without those bifurcations are not too meaningful from a per-run point of view. So you typically average the hell out of the results to obtain a decent phase-space (solution-space) picture for each parameter setting, which washes out the precision-induced errors. In the second case, ill-conditioned matrices, you simply need to pre-condition them so the relevant operations do not divide by tiny numbers. This is easily done with some linear algebra. Most computational laboratory programs, like Matlab, take conditioning into account or let it be invoked easily.

There is no general solution to the precision issue. Every model has to be explored against noise, and precision errors can be explored in the same way. I don't know much about model types outside my field, but if you introduce some noise above the precision-error threshold and your phenomenon isn't perturbed by it, then you are safeguarded against precision errors. For some models you may even be able to calculate how much precision you'll need, depending on the complexity.
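
As a toy illustration of that "perturb it with noise above the precision floor and average" idea, here is a small Python sketch using the logistic map as a stand-in chaotic system (the map, parameters, and run counts are just illustrative choices, not anything from the linked paper):

```python
import random

def logistic_orbit(x0, r=3.9, n=1000):
    """Iterate the chaotic logistic map x -> r*x*(1-x) for n steps."""
    x = x0
    for _ in range(n):
        x = r * x * (1 - x)
    return x

# Two starts that differ near the double-precision floor end up far apart:
print(logistic_orbit(0.2), logistic_orbit(0.2 + 1e-15))

# But a statistic averaged over many noisy starts (noise well above the
# precision floor) is stable to within sampling error, so tiny rounding
# differences stop mattering.
def mean_endpoint(x0, runs=5000, noise=1e-6):
    total = sum(logistic_orbit(x0 + random.uniform(-noise, noise)) for _ in range(runs))
    return total / runs

print(mean_endpoint(0.2), mean_endpoint(0.2 + 1e-15))
```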
 
If you take it to the next level, say you have a 3D scanning and printing system to reproduce transmission gears. Unless you have a standard way to certify the accuracy of the machines, there is no way to port the data between machines with any guarantee of the accuracy and precision needed. There may not even be a way to scan and print the part accurately, because we really have no way of measuring the accuracy and precision of a simple machine.

with the example you're using, a computer's accuracy is out of the equation and it's a question of the software's accuracy (or scanning hardware's accuracy).

assume the scanner has no limitations regarding accuracy.

bring that scan into SketchUp and your precision is going to be limited to ~.001"

bring the scan into rhino and your accuracy is limited to approx 15 decimal places. (its algorithms are apparently capable of greater precision so in rhino's case, the computer is the limiting factor)
http://www.rhino3d.com/accuracy

.000000000000001mm is way way tighter tolerance than any manufacturing ability and with rhino, you're able to control your working tolerance.. nobody uses it maxed out like that.. it's highly impractical.. (and really, your initial scan will be far from being this precise)

---

but it sounds like you're suggesting i could model something in rhino on an imac then send it over to a PC and the files will be different due to a difference in cpus between the two machines?

i seriously don't think that's likely and any mismatches are going to be software related.. the scale you're talking about working at (gear scans) is a software problem.. you could take a 128bit computer and a 32bit computer-> load sketchup on both -> both systems are going to hit the same accuracy limit due to the software being used, long before either computer's inherent accuracy limit is reached.

i guess my whole point is "please, no more benchmarks!" ..make a benchmark like that and this forum will start arguing about it even though nobody here has a use for it nor software that works at such precision.. further, i don't think a benchmark like that would be testing cpuA vs cpuB.. it would just be testing whatever mathematical protocols are in place on said machine.
 
I think you are making a mountain out of a small grain of dust.

If this were an actual problem there would be wings falling off airplanes because the bolt holes didn't line up.

And I'm still curious why you dropped your whole:

It seems clear that a closed machine like the nMP could be designed to be far more accurate than a bus-based machine where cards are interchanged.


As that was clearly a bunch of malarkey.

Were you aware that the GPUs used in the nMP dropped the ECC RAM used in the REAL workstation versions of the cards? So if Apple was even mildly concerned about "accuracy" in its "closed" nMP, it seems pretty "clear" that was a questionable move.

I honestly think your time would be better spent buying a telescope and worrying about a meteor hitting the earth.
 
I honestly think your time would be better spent buying a telescope and worrying about a meteor hitting the earth.

that's one area where we will possibly see critical errors with current computer accuracy.. sort of.

if you want to simulate a meteor's path within the next hundred years, you could comfortably do so to an accurate or accurate enough degree.

if you want to simulate the path for the next 10 billion years, minuscule errors may begin to compound over that amount of time, and at the end you may still be left wondering if the rock is going to collide with the earth in 10 billion years.

point being- current computers aren't 100% precise and advancements still need to happen in this area.. however, with the examples being used by burnsranch in the thread (3D printing and scanning.. manufacturing), current computers are plenty accurate for that.. (though you need to choose software wisely if needing incredibly precise work.. ie- don't try to design a mirror for hubbleteleII with sketchup)
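
fwiw, the "tiny errors compounding over a long run" effect is easy to see with nothing fancier than repeated addition in python.. (math.fsum is the standard library's error-compensated sum; the step value and count here are just for illustration)

```python
import math

n = 10_000_000
dt = 0.1                      # 0.1 has no exact binary representation

total = 0.0
for _ in range(n):
    total += dt               # a tiny rounding error is added at every step

print(total)                              # drifts noticeably away from 1,000,000
print(math.fsum(dt for _ in range(n)))    # compensated sum: essentially exact
print(n * dt)                             # one multiplication: one rounding, not ten million
```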
 
that's one area where we will possibly see critical errors with current computer accuracy.. sort of.

if you want to simulate a meteor's path within the next hundred years, you could comfortably do so to an accurate or accurate enough degree.

if you want to simulate the path for the next 10 billion years, minuscule errors may begin to compound over that amount of time, and at the end you may still be left wondering if the rock is going to collide with the earth in 10 billion years.

point being- current computers aren't 100% precise and advancements still need to happen in this area.. however, with the examples being used by burnsranch in the thread (3D printing and scanning.. manufacturing), current computers are plenty accurate for that.. (though you need to choose software wisely if needing incredibly precise work.. ie- don't try to design a mirror for hubbleteleII with sketchup)

Perfect precision isn't important in the meteor scenario. Typically for things like this, you simulate many, many different runs and average the outcomes to get an idea of the most probable one. You also add noise to the trajectory, say by minute perturbations (always above the precision limit, which renders it moot), to explore even more of the possible outcomes.

A lot can be overcome by simple averaging over many measurements/simulation runs.
 
this is all way over my head and i understand the need for precision, but to quote Plato, "there is no perfect, only the idea of perfect", meaning that no matter how hard we aim for precision, it will never be perfect. The differences in these errors are so minuscule that they really do not matter, it seems. I often think about taking measurements. The number of values between 3 inches and 4 inches is presumably infinite, but they are bounded in that they can only exist between 3 and 4; therefore 3 9/16" is close enough to build a table. Obviously, there is a monumental level of difference between building furniture and 3D printing, but it sounds like you're asking a printer to print down to the molecule, and whether that molecule is placed here or nudged a tiny measure of space over there, the end product should function the same.

again, this is all over my head and i probably don't understand the level of precision you require, but if it's true perfection you desire, tis but a lost cause.
 
Were you aware that the GPUs used in the nMP dropped the ECC RAM used in the REAL workstation versions of the cards? So if Apple was even mildly concerned about "accuracy" in its "closed" nMP, it seems pretty "clear" that was a questionable move.

ECC isn't even for "computational accuracy"; it is for data integrity while the bits are sitting in, or moving through, memory.

I'm not arguing with you, I'm agreeing with you, and I'm pointing out that computational accuracy is of almost nil importance nowadays, at least for us consumers and prosumers. The accuracy that does matter is accuracy when dealing with physical hardware, and that's where things like ECC memory and checksummed hard drive data come into play.

Storage accuracy is the big thing right now, because we have finally gotten to the point where the once "impossibly rare" error rates of the past are now something you can actually hit on one or two of the largest hard drives available.
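
A minimal sketch of the checksumming idea, in Python with the standard hashlib module (the file name is just a placeholder): record a digest when you write the data, re-check it later, and any silent corruption shows up as a mismatch.

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 so silent corruption shows up as a hash mismatch."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# print(sha256_of("archive.tar"))   # hypothetical file; compare against the stored digest
```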
 
I'm pointing out that computational accuracy is of almost nil importance nowadays, at least for us consumers and prosumers.

nah.. the same goes for professionals as well.. most professionals, by far, are using the same computers that everyone else is using.

pretty sure if you're doing research or work which is more demanding, accuracy wise, than what can be done on a typical intel chip, you're already going to know that since you're probably smarter than most people and seek alternate means.. i don't think you'd need a benchmark to compare dell vs hp vs apple in this regard.

i really don't know what field currently demands more accuracy than can be provided by today's computers.. maybe something at nasa etc?

but using the large hadron collider for example:
"The scientists at CERN decided to focus on using relatively inexpensive equipment to perform their calculations. Instead of purchasing cutting-edge data servers and processors, CERN concentrates on off-the-shelf hardware that can work well in a network."

http://science.howstuffworks.com/science-vs-myth/everyday-myths/large-hadron-collider6.htm
 
I got to thinking about computational accuracy the other day. I had somewhat assumed this was part of the difference between a professional workstation and a home computer. I was surprised when I searched the web that the only work I could find on it was close to 20 years old.

It seems clear that a closed machine like the nMP could be designed to be far more accurate than a bus-based machine where cards are interchanged. I was curious how the industry measures the accuracy of professional-grade workstations.

Sorry, no. What you are asking for is simply not feasible. A machine of your description would require custom-designed processing units, and its cost would run to millions of dollars. Maybe in the future, when CPUs/GPUs get hardware support for arbitrary-precision numbers. But that will take a very long time.

There is no difference in precision between consumer and workstation computers. The highest precision that mainstream hardware can utilise is the 64-bit IEEE 754 double, even though some CPUs can use higher precision internally. The same goes for GPUs, which can usually perform double-precision computations, though at reduced speed.

Double precision is enough for most practical applications. Of course, there can always be problems, especially with complex computations, and this is where things like numerical stability come into play. In short, there are ways to improve a computational algorithm so that you get better final precision when operating on imprecise numbers.

In addition, in the rare case that you need more precision than the hardware offers, you can always emulate higher-precision (or even arbitrary-precision) numbers in software. However, in the vast majority of cases, developing a numerically sound algorithm and using hardware doubles will give you the same precision with major performance benefits.
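
For the "emulate it in software" route, Python's standard library already ships two flavours of it (decimal for a user-chosen number of digits, fractions for exact rationals); a small sketch:

```python
from decimal import Decimal, getcontext
from fractions import Fraction

# Hardware doubles: 0.1 has no exact binary form, so rounding shows through.
print(0.1 + 0.2)                        # 0.30000000000000004

# Software-emulated decimal precision: as many digits as you ask for (here 50).
getcontext().prec = 50
print(Decimal("0.1") + Decimal("0.2"))  # 0.3

# Exact rational arithmetic: no rounding at all, at a real speed cost.
print(Fraction(1, 3) * 3)               # 1
```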
 
nuclear weapons simulation, weather prediction, atmospheric research ?

Nope, at least for me. I simulate turbulence in the atmosphere and model larger scale meteorological features. I've done so on Mac Pros for almost a decade. Our limiting factor isn't computational accuracy. More important are: the assumptions that are made in deriving our governing equations, the errors associated with initial conditions, the limitations of our physical understanding, and limitations of raw computing power.

You give too much credit :p
You hurt me deeply :)
 
"The metre or meter (American spelling), (SI unit symbol: m), is the fundamental unit of length (SI dimension symbol: L) in the International System of Units (SI), which is maintained by the BIPM.[1] Originally intended to be one ten-millionth of the distance from the Earth's equator to the North Pole (at sea level), its definition has been periodically refined to reflect growing knowledge of metrology. Since 1983, it has been defined as "the length of the path travelled by light in vacuum during a time interval of 1/299,792,458 of a second."

If you cannot measure the accuracy of your measurements or your calculations, you do not have a tool you can use for science.

I know that roughly ten loops of taking a square root exceed the accuracy of my cheap calculator. I know how to certify that my meter stick is one meter.
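
The square-root round trip is an easy experiment to reproduce; a quick Python sketch (the loop counts are chosen just for illustration):

```python
import math

x = 2.0
for loops in (10, 30, 50):
    y = x
    for _ in range(loops):      # take the square root repeatedly...
        y = math.sqrt(y)
    for _ in range(loops):      # ...then square back the same number of times
        y = y * y
    # In exact arithmetic y would be 2 again; in 64-bit floats the answer
    # drifts, and the drift grows quickly with the number of round trips.
    print(loops, y)
```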

If I have no way to measure the precision and accuracy of my computer, I do not know what it is. I do know that changing hardware can change the accuracy, but there are no standard benchmarks to measure the changes.

You have no way to confirm whether a machine you are using is accurate or defective. Intel proved that with the Pentium FDIV bug in 1994.

It is not a problem for our needs at present, but if you start to base science and engineering on tools that have no known accuracy, only guesses, you will run into problems.

I understand the precision issues of simple CPU math, but not of sin, cos, tan and other functions. I understand the difference ECC RAM would make to accuracy, and I understand how IEEE 754 standardized the handling of rounding errors, but we still do not have the ability to measure the accuracy and precision of computers.

This is like the modern youth at the cash register who does not know how to count out change without the computer figuring it out for him.
The only difference is that these errors are more complex than most people can understand.
 