The solution I came up with was to place the digits into a basic C array of integers. For example, the number 165 would be represented as the array {5, 6, 1}: 5 goes in the 0th position, 6 in the 1st, and 1 in the 2nd. Any integer can be represented by breaking it down into the sum of its digits n * 10^x, with x being each digit's position in the array. I then did arithmetic (+, -, *, /) element by element between the two arrays, carrying digits to the next element as needed. I am sure there are much more elegant (and memory-friendly!) solutions to this problem, but using this methodology I was able to calculate very large integers (e.g. 300!) in a matter of a few seconds.
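
A minimal sketch of that representation in C (this isn't the original code; the names to_digits and big_add, the base-10 digits, and the fixed MAX_DIGITS cap are all just illustrative choices):

#include <stdio.h>

#define MAX_DIGITS 64  /* arbitrary cap for this sketch */

/* Break n into decimal digits, least significant first:
   165 -> {5, 6, 1}. Returns the number of digits written. */
int to_digits(int n, int digits[MAX_DIGITS])
{
    int count = 0;
    do {
        digits[count++] = n % 10;
        n /= 10;
    } while (n > 0);
    return count;
}

/* Add two digit arrays, carrying into the next element as needed.
   Returns the number of digits in the sum. */
int big_add(const int a[], int alen, const int b[], int blen,
            int sum[MAX_DIGITS])
{
    int carry = 0, len = (alen > blen) ? alen : blen;
    for (int i = 0; i < len; i++) {
        int d = carry;
        if (i < alen) d += a[i];
        if (i < blen) d += b[i];
        sum[i] = d % 10;
        carry = d / 10;
    }
    if (carry) sum[len++] = carry;
    return len;
}

int main(void)
{
    int a[MAX_DIGITS], b[MAX_DIGITS], s[MAX_DIGITS];
    int alen = to_digits(165, a);
    int blen = to_digits(999, b);
    int slen = big_add(a, alen, b, blen, s);
    for (int i = slen - 1; i >= 0; i--)
        putchar('0' + s[i]);
    putchar('\n');  /* prints 1164 */
    return 0;
}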

This solution worked well enough that I made my own Obj-C BigInteger class out of it (and a rudimentary iPhone big-integer calculator), passing it NSStrings representing the numbers, converting them to C arrays to do the calculations, then returning an NSString as the result.
Say, that gives me an idea, since the point is to make the calculation the way humans do rather than the way the computer itself does. If that's not clear, people in grade school are often taught to do something like this:

   24
x  15
-----
  120
+ 240
-----
  360

where you take each digit one at a time, multiply it, and then take the sum of all of those partial products. A computer, I think, just does the whole operation at once. Here, though, you must instead store the digits as elements of an array or a linked list, since an int would overflow and a double would lose accuracy.
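
As a sketch, that grade-school algorithm over digit arrays (least significant digit first, matching the representation above) could look like this; big_mul is an illustrative name, not anyone's actual code:

#include <stdio.h>

/* Grade-school long multiplication over decimal digit arrays
   (least significant digit first). Each digit of a is multiplied
   by each digit of b; the partial products accumulate at position
   i + j, with carries propagated as in the worked example above.
   Returns the number of digits in the product. */
int big_mul(const int a[], int alen, const int b[], int blen,
            int product[])
{
    int len = alen + blen;
    for (int i = 0; i < len; i++)
        product[i] = 0;
    for (int i = 0; i < alen; i++) {
        int carry = 0;
        for (int j = 0; j < blen; j++) {
            int d = product[i + j] + a[i] * b[j] + carry;
            product[i + j] = d % 10;
            carry = d / 10;
        }
        product[i + blen] += carry;
    }
    while (len > 1 && product[len - 1] == 0)
        len--;  /* trim leading zeros */
    return len;
}

int main(void)
{
    int a[] = {4, 2};  /* 24, least significant digit first */
    int b[] = {5, 1};  /* 15 */
    int product[4];
    int len = big_mul(a, 2, b, 2, product);
    for (int i = len - 1; i >= 0; i--)
        putchar('0' + product[i]);
    putchar('\n');     /* prints 360 */
    return 0;
}

Accumulating a[i] * b[j] into product[i + j] is the array version of the trailing zeros you append to each partial product on paper.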


However, how you did it confuses me. How did you expand a basic array as more digits were needed? Would it be better to use a linked list instead, since you never know the length of the number in advance, or was the array intentional, to avoid complications?
 
The first number is from a power (2^1000) and the second from a factorial (100!). And yes, it'll still fail with a long int. While a double can hold a number of that magnitude, you can't solve the problem with one, because a double rounds after roughly 15-17 significant decimal digits, and the numbers I need to work on are way above that limit.
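
A quick way to see that rounding limit, assuming IEEE 754 doubles (53-bit significand, so not every integer above 2^53 is representable):

#include <stdio.h>

int main(void)
{
    double big = 9007199254740992.0;  /* 2^53 */
    /* 2^53 + 1 is not representable as a double, so the
       addition rounds back down and the 1 is silently lost. */
    printf("%.0f\n", big);        /* 9007199254740992 */
    printf("%.0f\n", big + 1.0);  /* 9007199254740992 again */
    return 0;
}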

It won't help with the 100! problem, but if you're using typical hardware that supports IEEE 754 floating-point numbers, a double will suffice for the first problem. Since 1000 fits within a double's exponent range and the significand of a power of two is just 1, the exact value can be stored and printed:

#include <stdio.h>
#include <math.h>

/* 2^1000 is a power of two, so a double holds it exactly. */
int main(void) { printf("%.0f\n", pow(2, 1000)); return 0; }
 
However, how you did it confuses me. How did you expand a basic array as more digits were needed? Would it be better to use a linked list instead, since you never know the length of the number in advance, or was the array intentional, to avoid complications?

It's one approach that would work, but the per-node memory overhead and the cost of traversing a linked list may bite you.

The usual approach for an array of indeterminate size is to allocate an array (arr) of sufficient size (arr_size) for the most typical case, and add a variable that stores how many elements of the array are actually in use (arr_used).

The invariant is that arr_used <= arr_size, and arr is always an array of arr_size elements.
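
In C, that bookkeeping might be a struct along these lines (a sketch; the type and field names just mirror this post's terminology):

#include <stddef.h>

/* Sketch of the bookkeeping described above. Invariant:
   arr_used <= arr_size, and arr points to arr_size elements. */
typedef struct {
    int   *arr;      /* one digit per element, least significant first */
    size_t arr_used; /* elements currently in use */
    size_t arr_size; /* elements allocated */
} BigInt;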

arr_size starts out as a number that's small enough not to use up too much memory (think of potentially having thousands of these objects), but big enough to handle the majority of cases without having to reallocate a new array (see below). arr_used starts at zero.

If the number overflows into needing an extra digit, you increase arr_used and start using the next element of arr. So at any given time, only elements 0 up to (arr_used - 1) of arr are significant. Elements arr_used up to (arr_size - 1) are ignored as junk.

If arr_used becomes >= arr_size, increase arr_size, allocate a new larger array of arr_size elements, copy arr into the beginning of the new array, deallocate arr, and assign the new array to arr. You don't increase arr_size by just 1; instead you increase it enough to minimise the likelihood of having to reallocate arr again in the near future. The multiplier is often called the growth factor.

Optionally, if the most significant digit becomes zero, you decrease arr_used. Again optionally, if arr_used drops below some threshold (often a percentage of arr_size), you decrease arr_size to arr_used, allocate a new smaller array of arr_size elements, copy elements 0 to (arr_size - 1) of arr into the new array, deallocate arr, and assign the new array to arr.
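
A sketch of the grow step and the optional shrink, reusing the BigInt struct above; the growth factor of 2 and the one-quarter shrink threshold are common illustrative choices, not the only ones:

#include <stdlib.h>
#include <string.h>

/* Ensure room for one more digit, growing arr when it's full.
   Assumes arr_size starts > 0. Returns 0 on success, -1 on
   allocation failure. */
int bigint_grow_if_full(BigInt *n)
{
    if (n->arr_used < n->arr_size)
        return 0;                       /* still room; nothing to do */
    size_t new_size = n->arr_size * 2;  /* growth factor of 2 */
    int *new_arr = malloc(new_size * sizeof *new_arr);
    if (new_arr == NULL)
        return -1;
    memcpy(new_arr, n->arr, n->arr_used * sizeof *new_arr);
    free(n->arr);
    n->arr = new_arr;
    n->arr_size = new_size;
    return 0;
}

/* Optional: shrink when usage falls below a quarter of capacity. */
void bigint_maybe_shrink(BigInt *n)
{
    if (n->arr_used == 0 || n->arr_used >= n->arr_size / 4)
        return;
    int *new_arr = malloc(n->arr_used * sizeof *new_arr);
    if (new_arr == NULL)
        return;                         /* shrinking is optional; keep the old array */
    memcpy(new_arr, n->arr, n->arr_used * sizeof *new_arr);
    free(n->arr);
    n->arr = new_arr;
    n->arr_size = n->arr_used;
}

realloc() could replace each allocate/copy/free sequence with one call; it's written out here to mirror the steps described above.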

This is all a really common technique for implementing dynamic arrays in C. It's described, with the code spelled out in detail, in any good C book.

There are no magic numbers for the initial value of arr_size, the growth factor, or the threshold at which you decrease arr_size. You need to profile real-world code to find which numbers work for you.
 