
hchung

macrumors 6502a
Oct 2, 2008
689
1
Wii: 729MHz IBM PowerPC CPU (1 core), 243MHz "Hollywood" GPU
PS3: 3.2GHz CPU (1 core I think), 550MHz NVIDIA GPU
XBOX 360: 3.2GHz Tri-Core PowerPC CPU, 500MHz ATI GPU

I think the GPU in the iPhone 5 might be better than these, but I hardly know anything about GPUs. The PowerPC processor in the 360 is interesting since you might be able to run Mac OS on it :D

Xbox 360 development was actually done on G5s. The Wii is basically a G3 and something along the lines of a Radeon R400/R500.

Running Mac OS wouldn't happen though because there's no support for the northbridge/southbridge components. Somebody'd have to do a ton of coding to get that sorted out.
 

hchung

macrumors 6502a
Oct 2, 2008
689
1
OK... just like anything then... why not do it because it's possible?

No reason at all? I would have thought there must be something... otherwise why wouldn't Apple just have used quad core instead... because they can. There's more power there too... battery reasons possibly?

Doing something just because it's possible is an awful idea, especially when it's all downside and no upside.

+1 core gets you:
1) higher power consumption.
2) a larger die (fewer chips are made per wafer).
3) lower chip yields (more of the chips manufactured are dead on arrival).
4) a harder layout (takes longer to get to market).
5) more performance.

The only thing positive is more performance. Which, for a screen of that size gets you:
1) bragging rights
2) even more ridiculously high framerates that no LCD can display.

I'm guessing everybody else here will agree with me when I say that bragging rights are not a good reason to make things more expensive, delay production even more, make the design harder, and hurt battery life.

The performance target for a system like this should never be benchmarks. It should be, "at what point do I reasonably max out all the other components in the pipeline."
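The die-size and yield points above (a larger die means fewer candidate chips per wafer, and a lower fraction of them surviving defects) can be sketched with a toy model. The gross-die estimate and the Poisson yield formula are standard first-order approximations; every number below is illustrative, not an actual A6 or wafer figure:

```python
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Rough gross-die count: wafer area over die area, minus an edge-loss term."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

def poisson_yield(die_area_mm2: float, defects_per_mm2: float) -> float:
    """Classic Poisson yield model: yield falls exponentially with die area."""
    return math.exp(-die_area_mm2 * defects_per_mm2)

# Illustrative only: a 300 mm wafer and 0.005 defects/mm^2,
# comparing two hypothetical die sizes (e.g. dual- vs. quad-core).
for area in (100.0, 130.0):
    good = dies_per_wafer(300, area) * poisson_yield(area, 0.005)
    print(f"{area:.0f} mm^2 die -> ~{good:.0f} good dies per wafer")
```

The bigger die loses on both terms at once: fewer dies fit on the wafer, and a smaller fraction of those come out working.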

----------

I see your point. I was off topic. I was referring to the CPU which contains the GPU Cores. :)

SoC.

The SoC contains the GPU and CPU. The CPU doesn't contain the GPU.

Nit-picky, I know. But I gotta say it for the other hundreds of people who get it wrong too.
 

cscheat

macrumors member
Sep 16, 2010
52
0
3 ?

[attached image: total-recall-kaitlyn-leeb_240x320.jpg]
 

nick_elt

macrumors 68000
Oct 28, 2011
1,578
0
No. The first time I got 900+, later I got 1630, and today I'm back in the 900s. Pretty simple, and I've got no reason to lie about it.

I read somewhere that they actually found the A6 to range between 800MHz and 1.2GHz, not just sit at 1GHz. I guess whatever the phone was doing at the time has something to do with it. (I'm really no expert, but I found the article interesting.)
 

diamond.g

macrumors G4
Mar 20, 2007
11,112
2,444
OBX
Peak memory bandwidth will limit the GPU fill rate in unified memory system architectures. The iPhone 5 appears to have significantly more peak memory bandwidth.
I didn't think TBDRs were as sensitive to that as IMRs are.
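The quoted point about bandwidth capping fill rate is back-of-the-envelope arithmetic: in a unified-memory design, every byte the GPU reads or writes per pixel competes for the same bus. A minimal sketch, where the bandwidth figure and per-pixel traffic are assumptions for illustration rather than measured iPhone 5 numbers:

```python
def max_fill_rate_mpix(bandwidth_gb_s: float, bytes_per_pixel: float) -> float:
    """Upper bound on fill rate when every shaded pixel must cross the
    external memory bus (the immediate-mode-renderer worst case)."""
    return bandwidth_gb_s * 1e9 / bytes_per_pixel / 1e6  # megapixels/s

# Assumed: ~6.4 GB/s of LPDDR2 bandwidth; a 32-bit color write plus a
# 32-bit depth read and write per pixel (12 bytes of traffic per fragment).
print(max_fill_rate_mpix(6.4, 12))  # ~533 Mpix/s ceiling
```

A TBDR keeps the depth traffic in on-chip tile memory, which is why it tends to sit further from that ceiling than an IMR does.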
 

iEvolution

macrumors 65816
Jul 11, 2008
1,432
2
Honestly, I would never have thought Apple would end up developing top-of-the-line processors.

Four years ago, if someone had told me this, I'd have thought they were crazy.

Now we need to get Apple to create processors for desktop/laptop computers.
 

MacinDoc

macrumors 68020
Mar 22, 2004
2,268
11
The Great White North
Honestly, I would never have thought Apple would end up developing top-of-the-line processors.

Four years ago, if someone had told me this, I'd have thought they were crazy.

Now we need to get Apple to create processors for desktop/laptop computers.
Well, Apple was one of the partners that developed the PowerPC architecture. But PowerPC ended up a long way from being able to compete with Intel on laptop- or desktop-class processors, which is probably why Apple made the massive switch from PowerPC to Intel.
 

diamond.g

macrumors G4
Mar 20, 2007
11,112
2,444
OBX
TBDRs may require a certain amount of scene depth complexity to gain any significant advantage under constrained memory bandwidth.

True, I always thought that lots of overdraw would help show the strength of TBDR. Most major engines nowadays seem to reduce the amount of overdraw being performed. I remember this being a big deal because of how draw calls were handled in DX; I don't remember it being as big a deal with OpenGL (or maybe I have the two reversed).

EDIT: Based on AnandTech's performance preview, it appears that the A6 should be bandwidth-constrained versus the A5X. Yet the A6 is posting faster or identical scores (in some of the offscreen tests).
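The overdraw discussion can be made concrete with a toy traffic model: an IMR pays external-memory bandwidth for every overdrawn fragment, while a TBDR resolves visibility and blending in on-chip tile memory and writes each screen pixel out once. This ignores caches, early depth rejection, and framebuffer compression on real GPUs, and the byte counts are assumptions:

```python
def imr_traffic_mb(pixels: int, overdraw: float, bytes_per_pixel: int = 8) -> float:
    """Immediate-mode worst case: every fragment's color and depth
    traffic hits external memory, so cost scales with overdraw."""
    return pixels * overdraw * bytes_per_pixel / 1e6

def tbdr_traffic_mb(pixels: int, overdraw: float, bytes_per_pixel: int = 4) -> float:
    """Tile-based deferred: shading and depth stay in tile SRAM; only
    the final color is written out, once per pixel, regardless of overdraw."""
    return pixels * bytes_per_pixel / 1e6

frame = 1136 * 640  # iPhone 5 screen resolution
for overdraw in (1.0, 2.5, 4.0):
    print(overdraw, imr_traffic_mb(frame, overdraw), tbdr_traffic_mb(frame, overdraw))
```

In this model the TBDR's framebuffer traffic is flat while the IMR's grows linearly with overdraw, which is why engines that already minimize overdraw narrow the gap.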
 
Last edited: