#6
If by "reducing the accuracy" you mean "reducing the fraction", then the GCD algorithm works no matter what.

http://en.wikipedia.org/wiki/Binary_GCD_algorithm

Reducing a fraction does not reduce its accuracy. 3/9 is exactly as accurate as 1/3, but only the latter is a reduced fraction.

http://en.wikipedia.org/wiki/Irreducible_fraction
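For reference, the binary GCD from the first link can be sketched in C roughly like this (the function name is mine, not from the article):

```c
#include <stdint.h>

/* Binary GCD (Stein's algorithm): uses only shifts, comparison and
   subtraction instead of division, per the linked Wikipedia page. */
uint64_t binary_gcd(uint64_t u, uint64_t v)
{
    if (u == 0) return v;
    if (v == 0) return u;

    /* Factor out the powers of two common to both arguments. */
    int shift = 0;
    while (((u | v) & 1) == 0) { u >>= 1; v >>= 1; shift++; }

    while ((u & 1) == 0) u >>= 1;          /* make u odd */
    do {
        while ((v & 1) == 0) v >>= 1;      /* make v odd */
        if (u > v) { uint64_t t = u; u = v; v = t; }
        v -= u;                            /* odd - odd is even */
    } while (v != 0);

    return u << shift;                     /* restore the common 2s */
}
```

Dividing numerator and denominator by this GCD reduces the fraction: `binary_gcd(3, 9)` is 3, so 3/9 reduces to 1/3 with no loss of accuracy.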

*thread starter*

#7
I want to reduce the actual accuracy of the numbers the fractions represent, not just simplify the fractions.

(1359 / 2750) ≈ 0.4942 -> (11 / 20) = 0.55

I already have a method of doing it, but it takes a long time to compute and I was wondering if there was a known way that's most likely a lot quicker.

Yes, I'm working on a Uni project where lots of lovely numbers like 1/3 are involved, and fractions are the only way to get maximum accuracy. Working in a minimum unit isn't really appropriate for the purpose of the project.

It would be bad programming practice not to account for the limitations of the CPU and to let an overflow occur during operations on the fractions. You could flag the fraction and abort the operation, but a much more desirable action would be to reduce the accuracy of the fraction so the operation stays within the CPU's limits.
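On GCC and Clang, one way to detect the overflow before it happens (so you can fall back to reducing accuracy instead of aborting) is the checked-arithmetic builtins. A sketch, assuming fractions stored as 64-bit numerator/denominator pairs; the function name is illustrative, not from the thread:

```c
#include <stdbool.h>
#include <stdint.h>

/* Multiply a/b * c/d, reporting failure instead of overflowing, so the
   caller can reduce the operands' accuracy and retry.
   __builtin_mul_overflow is a GCC/Clang intrinsic. */
bool mul_fractions(int64_t a, int64_t b, int64_t c, int64_t d,
                   int64_t *num, int64_t *den)
{
    if (__builtin_mul_overflow(a, c, num)) return false;
    if (__builtin_mul_overflow(b, d, den)) return false;
    return true;   /* caller may still reduce the result by its GCD */
}
```

When this returns `false`, nothing has overflowed yet; the caller can shrink the operands (e.g. with a denominator-bounded approximation) and try again.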