From the article:
That sounds a bit strange... Why would 64-bit be better graphically? The advantage of 64-bit is just more memory address space and larger integer operations. 32-bit is already enough to cover the required pixel space and colour space.
Also, animations usually aren't integer math and most of these animations are run by the GPU anyway.
It sounds like Apple is also including a special vector engine similar to AltiVec or SSE in this chip to increase animation performance, but that wouldn't necessarily be related to it being 64-bit.
You're absolutely right that it sounds strange.
Because the article's claims are completely ludicrous.
Adding special instructions to the CPU doesn't really help much.
You know what helps increase animation performance when your animations are layers composited on a GPU? Making a better GPU and giving it more memory bandwidth.
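To put rough numbers on that (all figures here are assumptions for illustration, not measurements of any actual Apple part), here's a back-of-envelope sketch of how much memory bandwidth full-screen layer compositing eats:

```python
# Back-of-envelope: why GPU compositing is memory-bandwidth-bound.
# All numbers are illustrative assumptions, not specs of a real chip.
width, height = 2048, 1536      # assumed retina-class screen resolution
bytes_per_pixel = 4             # RGBA8888
fps = 60
layers = 3                      # assumed layers sampled per frame

frame_bytes = width * height * bytes_per_pixel
# Each frame: read `layers` full-screen textures + write 1 framebuffer.
traffic_per_sec = frame_bytes * fps * (layers + 1)
print(f"~{traffic_per_sec / 1e9:.1f} GB/s just to composite")  # ~3.0 GB/s
```

Even this toy case lands in the gigabytes-per-second range, which is why widening the memory bus matters far more for animation than any new CPU instruction.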
----------
Isn't refreshing RAM one of the biggest battery costs on an idle system?
Would it be dumb to suggest they could use the 64-bit address space to directly access the storage chips?
FusionIO could license them code to do this; the LLVM compiler chain knows the lifetime of an object and could flag whether it only needs to be read, in which case it could be read directly from storage instead of being copied into memory. That could speed up all of the above as well, making it seem like the machine has more RAM without the battery cost of more RAM.
Yes, it would be dumb specifically because the storage chips are NAND Flash.
Allowing for direct addressing of the NAND flash by the CPU (and of course encouraging people to use it) decreases overall performance for at least the following reasons:
1) computation of ECC and remapping is now done by your application instead of a dedicated controller. Obviously, more work done by your CPU that it didn't need to do before means you have less computing power available. Even if you added special instructions and logic to accelerate this, you'd still lose compared to having a dedicated controller, for a whole 'nother variety of reasons stemming from just this point.
2) your CPU has to wait for the NAND; the extra time spent context switching while you wait, if you're not hung, means you've got less computing power available than before, because you're spending time doing stuff you didn't have to do before.
3) NAND is slow. Really slow. In fact, it's possible to get into a situation where writing to NAND is slower than writing to a crappy 5400rpm spinning disk. Inadvertently letting developers who don't understand this use NAND as working memory would be unbelievably bad. If you want to get a taste of this, get a JMF601-based SSD, write random bits across the entire thing, and then install your OS over it. It will be abysmally slow because it'll hit all the pain points of NAND.
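A rough sketch of the latency gap, using assumed ballpark figures (~100 ns for a DRAM access, ~50 µs for a NAND page read; real parts vary widely by generation and vendor):

```python
# Illustrative (assumed) latencies -- not specs for any particular part.
DRAM_ACCESS_NS = 100          # ~100 ns for a DRAM access
NAND_PAGE_READ_US = 50        # ~25-100 us to read a NAND page
NAND_BLOCK_ERASE_MS = 2       # erase-before-rewrite is millisecond-scale

nand_read_ns = NAND_PAGE_READ_US * 1_000
stall_ratio = nand_read_ns / DRAM_ACCESS_NS
print(f"A NAND page read costs ~{stall_ratio:.0f}x a DRAM access")
```

With these assumptions a single page read costs hundreds of DRAM accesses, and erases are another ~40x beyond that, which is why treating NAND as working memory stalls the CPU so badly.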
Basically:
1) NAND is too slow.
2) Developers wouldn't know how to properly use it.
3) A dedicated NAND controller saves the CPU a lot of work and does it in parallel; getting rid of it means the CPU has to do that work itself.
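To make point 3 concrete, here's a toy single-error-correcting Hamming(7,4) code in Python. A real NAND controller runs much stronger BCH/LDPC codes over multi-kilobyte pages in dedicated hardware; this sketch just illustrates the kind of per-bit bookkeeping the CPU would inherit on every single access if the controller went away:

```python
def hamming74_encode(d):
    """Encode 4 data bits (list of 0/1) into a 7-bit codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # parity over codeword positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4          # parity over positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4          # parity over positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Correct up to one flipped bit in a 7-bit codeword; return the 4 data bits."""
    c = c[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 | (s2 << 1) | (s3 << 2)  # 1-based position of the bad bit
    if syndrome:
        c[syndrome - 1] ^= 1               # flip it back
    return [c[2], c[4], c[5], c[6]]

codeword = hamming74_encode([1, 0, 1, 1])
codeword[2] ^= 1                           # simulate a NAND bit flip
assert hamming74_correct(codeword) == [1, 0, 1, 1]
```

Now imagine doing the real (much heavier) version of this in software on every page read and write, plus wear-leveling remaps, instead of having a controller do it in parallel off to the side.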