Time and memory space are the two main constraints on what we can compute. Some problems require lots of memory, some lots of time, and many demand a lot of both. Studying these constraints is the domain of computational complexity researchers, who measure time as the number of steps a computer takes to complete a task, and space as the number of memory slots the task requires.
Intuitively, the two are linked: a task that takes X steps can touch at most one new memory slot per step, so even in the worst case it needs no more than X memory slots.
But researchers have been able to lower the bar for how much memory is needed in this worst-case scenario. In the 1970s it was discovered that any computation taking X steps can in fact be done with roughly X/log X memory slots. A program that took 100 time steps, for instance, could always run within 50 memory slots, taking the logarithm in base 10 for the sake of illustration, so that log 100 equals 2.
“That’s the best that we’ve known until last week,” says Lance Fortnow at the Illinois Institute of Technology. But then Williams released a surprising paper showing that this can be reduced dramatically, to roughly the square root of X multiplied by log X. Instead of 100 or 50 memory slots, a 100-step computation could get by with around 15 slots.
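To make the article's back-of-the-envelope numbers concrete, here is a small sketch of the arithmetic. It follows the illustrative base-10 logarithm used above; the actual theorems are stated up to constant factors with the base-2 logarithm, so the exact figures are only indicative.

```python
import math

def old_space_bound(steps: float) -> float:
    # 1970s result: a computation of X steps fits in roughly X / log X memory slots
    return steps / math.log10(steps)

def new_space_bound(steps: float) -> float:
    # Williams's bound: roughly the square root of X * log X memory slots suffice
    return math.sqrt(steps * math.log10(steps))

steps = 100
print(old_space_bound(steps))  # 50.0  -> the "50 memory slots" in the example
print(new_space_bound(steps))  # ~14.1 -> the "around 15 slots" in the example
```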
“It was kind of a shocker when Ryan sent this paper around last week, and we were all like, ‘wow’,” says Fortnow.
The finding sounds unlikely because it means that computers seem to need only enough space to hold a small part of a problem in memory at once, a bit like a person solving a complex, multi-step maths problem without writing everything down, relying only on limited short-term memory.
Williams’s approach hinges on what is known as the tree evaluation problem. This involves a series of linked calculations arranged in a branching, tree-shaped structure, where working out the final result at the “root” of the tree first requires calculating the “leaves”, then the “branches”, and so on. Recent advances in solving the tree evaluation problem have shown that it is possible to do so with an algorithm that can re-use computer memory already occupied by other data, itself a completely unexpected discovery.
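As an illustration only, here is a minimal sketch of the straightforward, recursive way to evaluate such a tree: leaves first, then branches, then the root. The node functions and tree shape are made up for the example, and this is not Williams's memory-saving method, which avoids this kind of recursion by re-using memory that is already in use.

```python
from dataclasses import dataclass
from typing import Callable, Optional

# A toy evaluation tree: leaves hold values, internal nodes combine
# their children's results with some function. (Purely illustrative.)
@dataclass
class Node:
    value: Optional[int] = None                            # set for leaves
    combine: Optional[Callable[[int, int], int]] = None    # set for internal nodes
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def evaluate(node: Node) -> int:
    # Leaves are evaluated first, then branches, and finally the root,
    # as described in the passage above.
    if node.value is not None:
        return node.value
    return node.combine(evaluate(node.left), evaluate(node.right))

# Root computes (3 + 4) * (10 - 2) = 56
root = Node(
    combine=lambda a, b: a * b,
    left=Node(combine=lambda a, b: a + b, left=Node(value=3), right=Node(value=4)),
    right=Node(combine=lambda a, b: a - b, left=Node(value=10), right=Node(value=2)),
)
print(evaluate(root))  # 56
```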
But while the magnitude of the discovery has shocked computer scientists, it won't necessarily change the way we use computers. That is because the result only shows how to shrink the memory a calculation requires; it doesn't reduce the time the calculation takes. And computer memory is fairly cheap and readily available, so cutting the amount we need isn't a priority.
A discovery that allowed the reverse would mean we could add more memory to computers and speed up computation as a result, something that would be very helpful as advances in processor speed have begun to slow, but whether this is possible is unclear. "Now that we know time-efficient algorithms can be made space-efficient, we can look for trade-offs which are pretty good for both time and space at the same time, and that's useful in a real sense," says Mertz.
Fortnow says he sees no immediate practical implications for the work, but points out that it does offer hope that more surprises in computational complexity could still come, and that they might shake up how we solve hard problems. "You're shocked once, you can be shocked again," he says.