This is more of a mathematics question than a programming one.
Say I have fractions a/b and c/d.
If a * c > UINT_MAX or b * d > UINT_MAX, how could I reduce the accuracy of one or both fractions so that the products fit inside UINT_MAX, without converting the fractions to their decimal values? Assume both fractions are already in their simplest form.
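
Here is a minimal C sketch of what I mean, assuming 32-bit unsigned int. The gcd() cross-cancellation step is exact (the two fractions can share factors across each other even when each one is individually in lowest terms); the halving loop after it is only a naive illustration of deliberately throwing accuracy away until the products fit, not something I think is a good algorithm. The helper names and the sample values of a, b, c, d are just made up for the example.

```c
/* Sketch of the problem, assuming 32-bit unsigned int.
 * a/b and c/d are each in lowest terms, but their product overflows. */
#include <limits.h>
#include <stdio.h>

static unsigned gcd(unsigned x, unsigned y)
{
    while (y != 0) {
        unsigned t = x % y;
        x = y;
        y = t;
    }
    return x;
}

/* Returns nonzero if x * y would exceed UINT_MAX. */
static int mul_overflows(unsigned x, unsigned y)
{
    return y != 0 && x > UINT_MAX / y;
}

int main(void)
{
    unsigned a = 4000000000u, b = 7u;           /* a/b */
    unsigned c = 3u,          d = 4000000001u;  /* c/d */

    /* Exact step: cancel common factors across the two fractions. */
    unsigned g1 = gcd(a, d);
    a /= g1;
    d /= g1;
    unsigned g2 = gcd(c, b);
    c /= g2;
    b /= g2;

    /* Lossy step: halve the terms of one fraction until both products
     * fit.  Accuracy is deliberately sacrificed here. */
    while (mul_overflows(a, c) || mul_overflows(b, d)) {
        if (a >= c) {
            a = a > 1 ? a >> 1 : 1;
            b = b > 1 ? b >> 1 : 1;
        } else {
            c = c > 1 ? c >> 1 : 1;
            d = d > 1 ? d >> 1 : 1;
        }
    }

    printf("approximate product: %u/%u\n", a * c, b * d);
    return 0;
}
```

In this example the halving loop turns 4000000000/7 into 1000000000/1, so the result drifts from about 3/7 to about 3/4, which is exactly the kind of accuracy loss I would like to minimize.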