
Reducing the accuracy of Fractions




MorphingDragon
Jun 20, 2011, 09:47 PM
This is more of a mathematics question than programming.

Say I have fractions a/b and c/d.

If, say, a * c > UINT_MAX or b * d > UINT_MAX, how could I reduce the accuracy of one or both of the fractions, without knowing their decimal values, so that the products fit inside UINT_MAX? This assumes the fractions are already in their simplest form.
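
A minimal sketch of the overflow test itself, assuming unsigned int components and C99 (the helper name is made up for illustration):

#include <limits.h>

/* Sketch: returns nonzero if x * y would exceed UINT_MAX. Dividing
   instead of multiplying means the test itself cannot overflow. */
static int mul_would_overflow(unsigned int x, unsigned int y)
{
    return y != 0 && x > UINT_MAX / y;
}

/* The question is what to do when this test fails for a * c or
   b * d, i.e. the product cannot be formed in an unsigned int. */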



balamw
Jun 20, 2011, 10:06 PM
Spitballing.

Look for max(a, b, c, d) > sqrt(UINT_MAX) and divide both the numerator and denominator by whatever factor brings that component back under sqrt(UINT_MAX).

?

B
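
A rough sketch of that idea, assuming unsigned int components; the truncating division is where the accuracy is lost:

#include <limits.h>
#include <math.h>

/* If either component of a fraction exceeds sqrt(UINT_MAX), scale the
   numerator and denominator down by the same factor so that later
   products of two such fractions stay below UINT_MAX. */
static void scale_down(unsigned int *num, unsigned int *den)
{
    unsigned int limit = (unsigned int)sqrt((double)UINT_MAX); /* 65535 for 32-bit unsigned int */
    unsigned int big = (*num > *den) ? *num : *den;

    if (big > limit) {
        unsigned int factor = big / limit + 1;  /* enough to bring big under the limit */
        *num /= factor;                         /* truncation: accuracy is lost here */
        *den /= factor;
        if (*den == 0)
            *den = 1;                           /* guard against a zero denominator */
    }
}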

chown33
Jun 21, 2011, 12:40 AM
I think you should just throw more bits at it.

Carry out the calculation using the next longer type, and then reduce. I think the C99 typedef is uint64_t.
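
A sketch of that approach, assuming 32-bit unsigned int components and C99, with a plain Euclidean GCD to put the product back in lowest terms:

#include <limits.h>
#include <stdint.h>

/* Euclidean GCD, used to put the 64-bit product back in lowest terms. */
static uint64_t gcd64(uint64_t x, uint64_t y)
{
    while (y != 0) {
        uint64_t t = x % y;
        x = y;
        y = t;
    }
    return x;
}

/* Multiply a/b by c/d in 64 bits, then reduce. Returns nonzero on
   success, or 0 if even the reduced result does not fit in unsigned
   int (the case where accuracy would have to be dropped). */
static int mul_frac(unsigned int a, unsigned int b,
                    unsigned int c, unsigned int d,
                    unsigned int *num, unsigned int *den)
{
    uint64_t n = (uint64_t)a * c;   /* 32 x 32 -> 64 bits, cannot overflow */
    uint64_t m = (uint64_t)b * d;
    uint64_t g = gcd64(n, m);

    if (g != 0) {
        n /= g;
        m /= g;
    }
    if (n > UINT_MAX || m > UINT_MAX)
        return 0;
    *num = (unsigned int)n;
    *den = (unsigned int)m;
    return 1;
}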

MorphingDragon
Jun 21, 2011, 12:59 AM
I think you should just throw more bits at it.

Carry out the calculation using the next longer type, and then reduce. I think the C99 typedef is uint64_t.

That's not always a solution, though. If a reduced component is still greater than UINT_MAX, you still have to deal with reducing the accuracy.


Spitballing.

Look for max(a, b, c, d) > sqrt(UINT_MAX) and divide both the numerator and denominator by whatever factor brings that component back under sqrt(UINT_MAX).

?

B

That might work... Certainly a lot faster than my original idea.

Sander
Jun 21, 2011, 03:53 AM
You want to do this because you are going to multiply those fractions?

chown33
Jun 21, 2011, 10:36 AM
That's not always a solution, though. If a reduced component is still greater than UINT_MAX, you still have to deal with reducing the accuracy.

If by "reducing the accuracy" you mean "reducing the fraction", then the GCD algorithm works no matter what.
http://en.wikipedia.org/wiki/Binary_GCD_algorithm

Reducing a fraction does not reduce its accuracy. 3/9 is exactly as accurate as 1/3, but only the latter is a reduced fraction.
http://en.wikipedia.org/wiki/Irreducible_fraction
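
For reference, a sketch of the binary GCD from that article, in C99:

#include <stdint.h>

/* Binary GCD (Stein's algorithm): only shifts and subtraction, no
   division, as described in the Wikipedia article linked above. */
static uint64_t binary_gcd(uint64_t u, uint64_t v)
{
    int shift = 0;

    if (u == 0) return v;
    if (v == 0) return u;

    while (((u | v) & 1) == 0) {    /* strip common factors of two */
        u >>= 1;
        v >>= 1;
        shift++;
    }
    while ((u & 1) == 0)
        u >>= 1;

    do {
        while ((v & 1) == 0)
            v >>= 1;
        if (u > v) {                /* keep u <= v */
            uint64_t t = u;
            u = v;
            v = t;
        }
        v -= u;                     /* v becomes even */
    } while (v != 0);

    return u << shift;              /* restore the common factors of two */
}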

MorphingDragon
Jun 22, 2011, 02:32 AM
If by "reducing the accuracy" you mean "reducing the fraction", then the GCD algorithm works no matter what.
http://en.wikipedia.org/wiki/Binary_GCD_algorithm

Reducing a fraction does not reduce its accuracy. 3/9 is exactly as accurate as 1/3, but only the latter is a reduced fraction.
http://en.wikipedia.org/wiki/Irreducible_fraction

I mean reducing the actual accuracy of the numbers the fractions represent, not just simplifying the fraction.

(1359 / 2750) 0.5463 -> (11 / 20) 0.55

I already have a method of doing it, but it takes a long time to compute, and I was wondering if there's a known way that's likely a lot quicker.
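
In case it's useful to anyone reading later: one known quick way to do this kind of reduction is to walk the continued-fraction expansion of the fraction and stop before the denominator exceeds a chosen bound. A sketch, assuming unsigned int components (it returns a convergent, so it may pick a slightly different fraction than the example above):

/* Walk the continued-fraction convergents of num/den and keep the last
   one whose denominator does not exceed max_den. The result is a good
   (though not always the best possible) approximation with small
   components. */
static void approx_frac(unsigned int num, unsigned int den,
                        unsigned int max_den,
                        unsigned int *out_num, unsigned int *out_den)
{
    unsigned int h0 = 0, h1 = 1;    /* convergent numerators h(-2), h(-1)   */
    unsigned int k0 = 1, k1 = 0;    /* convergent denominators k(-2), k(-1) */
    unsigned int n = num, d = den;

    *out_num = num;                 /* fallback if no convergent fits */
    *out_den = den;

    while (d != 0) {
        unsigned int a = n / d;     /* next continued-fraction term */
        unsigned int r = n % d;
        unsigned int h2 = a * h1 + h0;
        unsigned int k2 = a * k1 + k0;

        if (k2 > max_den)
            break;                  /* next convergent exceeds the bound */
        *out_num = h2;
        *out_den = k2;
        h0 = h1; h1 = h2;
        k0 = k1; k1 = k2;
        n = d;
        d = r;
    }
}

If the original denominator is already within the bound, the loop simply runs to the end and hands back the original fraction unchanged.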


You want to do this because you are going to multiply those fractions?

Yes, I'm working on a Uni project where lots of lovely numbers like 1/3 are involved, and there's no way other than fractions to get maximum accuracy. Working in a minimum unit isn't really appropriate for the purpose of the project.

It would be bad programming practice not to account for the limitations of the CPU and to allow an overflow exception to occur during operations on the fractions. You could flag the fraction and abort the operation. A much more desirable action would be to reduce the accuracy of the fraction so you stay within the limits of the CPU during the operation.

Sander
Jun 22, 2011, 03:20 AM
Yes, I'm working on a Uni project where lots of lovely numbers like 1/3 are involved, and there's no way other than fractions to get maximum accuracy. Working in a minimum unit isn't really appropriate for the purpose of the project.

Would you still flag the fraction as having "reduced accuracy"? I'm asking because when you print out your final result as a fraction, a user might rightfully assume that this is an exact answer. If you're reducing accuracy anyway, you could consider simply switching to floating point arithmetic at that point.

balamw
Jun 22, 2011, 05:21 AM
A much more desirable action would be to reduce the accuracy of the fraction so you stay within the limits of the CPU during the operation.

Question: So, how is this different than just switching to floating point if there is an overflow? That would typically be more accurate than your "(1359 / 2750) 0.5463 -> (11 / 20) 0.55" example. As Sander says, you would still somehow have to indicate that precision was lost.

Also, have you considered using gmp http://gmplib.org/ or some other bignum/"arbitrary precision" library to maintain your precision... http://en.wikipedia.org/wiki/Arbitrary-precision_arithmetic

Finally, you say you have an approach. What is it? Can you describe it or post the code? Maybe someone else has the same problem or idea but will never know otherwise...

B
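
Not from the project, just a minimal sketch of what GMP's rational type looks like in use (build with -lgmp):

#include <stdio.h>
#include <gmp.h>

/* GMP's mpq_t holds exact rationals of arbitrary size, so products
   never overflow and never lose accuracy. */
int main(void)
{
    mpq_t a, b, prod;

    mpq_init(a);
    mpq_init(b);
    mpq_init(prod);

    mpq_set_ui(a, 1359, 2750);
    mpq_set_ui(b, 1, 3);
    mpq_canonicalize(a);            /* mpq_set_ui does not reduce for you */
    mpq_canonicalize(b);

    mpq_mul(prod, a, b);            /* exact result, already in lowest terms */
    gmp_printf("%Qd * %Qd = %Qd\n", a, b, prod);

    mpq_clear(a);
    mpq_clear(b);
    mpq_clear(prod);
    return 0;
}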

MorphingDragon
Jun 22, 2011, 10:24 AM
Question: So, how is this different than just switching to floating point if there is an overflow? That would typically be more accurate than your "(1359 / 2750) 0.5463 -> (11 / 20) 0.55" example. As Sander says, you would still somehow have to indicate that precision was lost.

That fraction was just an example of what I wanted to achieve; the library would, in theory, have to do it on fractions like 3/2^31.

Floating point was our first idea, but the application does a lot of calculations involving radians, and tricks with fractions let us calculate the angles accurately without actually needing a value for pi.

Also, have you considered using gmp http://gmplib.org/ or some other bignum/"arbitrary precision" library to maintain your precision... http://en.wikipedia.org/wiki/Arbitrary-precision_arithmetic

Now why the hell don't they tell you about these things at Uni? :D There's no point changing it now anyway; it's due in on Friday and I already have it working. I just want to try to improve it.

In reality, the application doesn't approach the limits. This is just to help satisfy the "Good Software Engineering Practice" portion of the marking schedule, by making sure limits are accounted for in the design and testing of the fraction library.

Finally, you say you have an approach. What is it? Can you describe it or post the code? Maybe someone else has the same problem or idea but will never know otherwise...

B

I don't have the actual code with me, but I wouldn't mind posting it when I get back to Hamilton.

chown33
Jun 22, 2011, 11:20 AM
Floating point was our first idea, but the application does a lot of calculations involving radians, and tricks with fractions let us calculate the angles accurately without actually needing a value for pi.

There's an old trick for embedded systems where angles are measured in pirads, i.e. the fraction of a semicircle. So a semicircle (180 degrees, pi radians) is exactly 1.0 pirads, a quarter circle (90 degrees, pi/2 radians) is exactly 0.5 pirads, etc. The range of the fixed-point or floating-point values is usually chosen to be 0.n (pure fraction) or 1.n bits, where n is the desired precision. Both formats are pretty easy to work with on puny processors, and angles like 180, 90, 45, etc. are represented exactly.
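
A small made-up sketch of that trick with a 16-bit binary angle, where the unsigned wrap-around on overflow is exactly the wrap-around of the angle itself:

#include <stdint.h>

/* 16-bit binary angle: the full circle is 65536 units, so 0x8000 is one
   pirad (180 degrees) and 0x4000 is half a pirad (90 degrees).
   Addition wraps modulo 2^16, which is exactly how angles wrap. */
typedef uint16_t bangle;

#define BANGLE_180 ((bangle)0x8000)   /* pi radians   */
#define BANGLE_90  ((bangle)0x4000)   /* pi/2 radians */
#define BANGLE_45  ((bangle)0x2000)   /* pi/4 radians */

/* Convert to degrees only when a human-readable value is needed. */
static double bangle_to_degrees(bangle a)
{
    return a * (360.0 / 65536.0);
}

/* Example: 180 + 90 + 180 degrees wraps to 90 degrees:
   (bangle)(BANGLE_180 + BANGLE_90 + BANGLE_180) == BANGLE_90 */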

balamw
Jun 22, 2011, 07:40 PM
Now why the hell don't they tell you about these things at Uni? :D

At least somewhere along the road you learned about overflows and that they should be dealt with. :p

As chown33 points out, many of these problems have solutions in the embedded/fixed-point world.

Another Wiki link for you to read: http://en.wikipedia.org/wiki/Fixed-point_arithmetic

B