I wasn't comparing myself to the engineers who design these things, but you don't need that kind of knowledge to know what kind of performance you'll get from them, or to understand the breakdown of how it all works.
We have to let this one go and agree to disagree with each other.

It absolutely does, if the iPad 3 only had an SGX543MP2 it'd be running Infinity Blade 2 (at ~1430x1050) at about 17 FPS, which is very noticeable -- the extra two SGX543's allow for it to maintain normal performance.
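As a back-of-the-envelope sketch of the claim above (the ~17 FPS figure and linear scaling with GPU core count are assumptions for illustration, not measurements):

```python
# Rough sketch: assume frame rate scales linearly with the number of
# identical GPU cores -- an idealisation; real scaling depends on
# bottlenecks elsewhere in the system.

def scaled_fps(base_fps, base_cores, new_cores):
    """Estimate FPS after changing the number of identical GPU cores."""
    return base_fps * new_cores / base_cores

# Hypothetical: Infinity Blade 2 at ~1430x1050 on an SGX543MP2 at ~17 FPS.
mp2_fps = 17
mp4_fps = scaled_fps(mp2_fps, base_cores=2, new_cores=4)
print(mp4_fps)  # 34.0 -- the two extra SGX543 cores double the estimate
```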
Unfortunately, neither the iPad 3 nor the iPad 2 is set up that way, so you're only speculating on something neither you nor anyone else can prove.

I'd suggest you read the part of Anandtech's review of the iPad 3 about the A5X and memory, but from what I remember, Apple's moved the memory interfaces closer to the GPU, and they've added two of them. So they've now got a 128 bit memory interface (4 x 32 bit) compared to 64 bit (2 x 32 bit) for the iPad 2.
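The width comparison is easy to check with a couple of lines (the 800 MT/s LPDDR2 data rate here is an assumption for illustration; only the 2-vs-4 interface ratio comes from the review):

```python
# Peak bandwidth comparison: doubling the number of 32-bit interfaces
# doubles peak bandwidth at the same data rate.

def peak_bandwidth_gbps(interfaces, width_bits, data_rate_mts):
    """Peak memory bandwidth in GB/s for N interfaces of a given width."""
    return interfaces * (width_bits / 8) * data_rate_mts * 1e6 / 1e9

a5_bw  = peak_bandwidth_gbps(2, 32, 800)  # iPad 2: 2 x 32-bit (64-bit total)
a5x_bw = peak_bandwidth_gbps(4, 32, 800)  # iPad 3: 4 x 32-bit (128-bit total)
print(a5_bw, a5x_bw)  # 6.4 vs 12.8 GB/s -- exactly double
```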
Read it, and I still don't see how that equates to double the performance of the iPad 2. On paper it looks like it should; however, how the new changes interact with the sum of all the rest of the components and software will ultimately determine the overall (real) performance improvements. The CPU/GPU alone does not dictate the overall results.

If you don't understand where I'm going with this, I'm only concerned with real results in the real world, not benchmarks and theoretical test result numbers. Most consumers aren't concerned about how much faster product X is over product Y if they can't or don't notice it. I can tell you that hardly any iPad 3 owner I'm aware of bought the device because they noticed double the graphics speed of the iPad 2. Nobody picks up the iPad 3 and says, "wow, this thing is so fast it's easily twice as fast as the iPad 2".
 
The A5X is just a necessity to support the new display. Having twice the graphics cores of the A5 doesn't mean it has double the graphics power, and the benchmarks prove it.

If you don't understand where I'm going with this, I'm only concerned with real results in the real world, not benchmarks and theoretical test result numbers.

These two things you're saying are at odds. First you say benchmarks prove that the A5X (not the iPad 3, but the A5X powering it) is not twice as powerful as the A5, even though benchmarks do say the A5X is twice as powerful, and now you're saying you're only talking about real-world results.

The A5X is twice as fast in many operations when on a level playing field (resolution) with the A5. However, the iPad 3 doesn't ship with the same resolution, so performance is obviously not double the iPad 2's.
 
While the technical discussion is interesting, the overall theme of this thread is pathetic. I have an Android phone and a new iPad. They are very different beasts. I do not like Apple's business model, but the iPad currently meets my needs.

Go use your iPad and be happy that it does what you want it to do and does it well. If people are happy with their Prime does that make your iPad lesser?
 
What do you expect when someone transcribes comments from an Android forum that sound like they were written by an insecure, energy-drink-chugging kid?

The tone of the original material was annoying regardless of what it was written about. It's the exact reason I gave up on most PC enthusiast forums many years ago. On most of them, if you aren't using the latest hacked Droid device, or rolling with an uber-overclocked $150 CPU running Linux, you're an idiot, etc.
 
We have to let this one go and agree to disagree with each other.
Very well.
Unfortunately, neither the iPad 3 nor the iPad 2 is set up that way, so you're only speculating on something neither you nor anyone else can prove.
What do you mean it isn't set up that way?
Read it, and I still don't see how that equates to double the performance of the iPad 2. On paper it looks like it should; however, how the new changes interact with the sum of all the rest of the components and software will ultimately determine the overall (real) performance improvements. The CPU/GPU alone does not dictate the overall results.

If you don't understand where I'm going with this, I'm only concerned with real results in the real world, not benchmarks and theoretical test result numbers. Most consumers aren't concerned about how much faster product X is over product Y if they can't or don't notice it. I can tell you that hardly any iPad 3 owner I'm aware of bought the device because they noticed double the graphics speed of the iPad 2. Nobody picks up the iPad 3 and says, "wow, this thing is so fast it's easily twice as fast as the iPad 2".
The PowerVR SGX543MP4 is twice as powerful as the PowerVR SGX543MP2, and the former is powering a screen resolution twice that of the latter. In some games, like Real Racing 2 or Modern Combat 3, that will be more than enough to run them smoothly at 2048x1536; others with heavy shaders, like Infinity Blade 2, will have to run at about 1536x1152, which is roughly double the pixels and the performance you'd expect (on paper) from adding another two SGX543s.

I'm not saying the iPad 3 is twice as fast as the iPad 2; in fact, it's slower (in certain games) if you take into account the higher resolution. All I'm saying is that the reason the iPad 3 wasn't showing a twofold improvement over the iPad 2 in GLBenchmark, at the same resolution, is that it's bottlenecked. At 2048x1536, you won't see the same thing.
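The pixel-count comparisons in this exchange are easy to check with a small helper (resolutions are the ones quoted above; the iPad 2's native 1024x768 is the baseline):

```python
# How much more work each resolution asks of the GPU, relative to the
# iPad 2's native 1024x768.

def pixel_ratio(w, h, base_w=1024, base_h=768):
    """Ratio of (w x h) pixel count to the baseline resolution."""
    return (w * h) / (base_w * base_h)

print(pixel_ratio(2048, 1536))  # 4.0  -- full Retina is 4x the pixels
print(pixel_ratio(1536, 1152))  # 2.25 -- Infinity Blade 2's reported res
print(pixel_ratio(1430, 1050))  # ~1.91 -- roughly double the baseline
```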
 
What do you mean it isn't set up that way?

Please refer to the statement you posted regarding my response. You commented on a theoretical setup, stating

"if the iPad 3 only had an SGX543MP2 it'd be running Infinity Blade 2 (at ~1430x1050) at about 17 FPS..."

I'm only interested in data we can test and confirm in the real world. As I'm certain I've clarified, I'm not a bit interested in theoretical performance.

The PowerVR SGX543MP4 is twice as powerful as the PowerVR SGX543MP2, and the former is powering a screen resolution twice that of the latter. In some games, like Real Racing 2 or Modern Combat 3, that will be more than enough to run them smoothly at 2048x1536; others with heavy shaders, like Infinity Blade 2, will have to run at about 1536x1152, which is roughly double the pixels and the performance you'd expect (on paper) from adding another two SGX543s.
Just looking at the GPUs, where 4 cores > 2 cores and each core is identical, mathematically you're correct in that regard, but that's not what I'm talking about or concerned with. The cores' performance in conjunction with other necessary hardware and software, at all levels, is what determines the actual resulting performance. The GPU cores alone don't do all the work required for the user to experience graphics; I hope I'm being clear about this.

I'm not saying the iPad 3 is twice as fast as the iPad 2; in fact, it's slower (in certain games) if you take into account the higher resolution. All I'm saying is that the reason the iPad 3 wasn't showing a twofold improvement over the iPad 2 in GLBenchmark, at the same resolution, is that it's bottlenecked. At 2048x1536, you won't see the same thing.

Bingo. You had it right, then went back into the irrelevancy of relying on benchmarks to tell the whole story.

Theoretically, there's no reason why all games shouldn't perform twice as fast on the iPad 3; after all, it has twice the graphics cores of the iPad 2. However, as you indicated, that's simply not the case; there are other factors involved that help determine overall real-world performance.

GLBenchmark, as I stated much earlier, is just a specific set of circumstances packaged into a single app. It's not the end-all of discussions, as it doesn't tell anyone everything they need to know. You say it's being bottlenecked, but I'm saying you haven't provided any data as to how you formed that conjecture. I'm with you if all you're saying is that performance isn't likely at the full potential of what all four PowerVR cores can provide, but how can you say it's being bottlenecked without showing any proof that it is?

I'll make this very clear: nobody I know of buys an iPad to run GLBenchmark as their primary purpose. Using your example of playing Infinity Blade 2, I can better accept someone buying an iPad just to play that game. So is the user more concerned about the performance of their iPad in GLBenchmark, or about the actual performance and experience while playing Infinity Blade 2?

A software developer is unqualified to make that determination, because they don't have enough of an understanding of what's really going on within the hardware, or the combination of hardware and software.
 
As you say, there are a lot of other factors that go into it, such as memory bandwidth, but I'm assuming Apple has done what is necessary to keep such other aspects from bottlenecking the SGX543MP4. E.g., if two 32-bit interfaces were enough for an SGX543MP2, then four 32-bit interfaces should be enough for an SGX543MP4, as it's exactly twice the power of the SGX543MP2.

As an example of the PowerVR SGX543MP4 not being bottlenecked by these other aspects of the device you speak of, I'd cite the resolution-independent low-level GLBenchmark tests (such as fill rate), which show the PowerVR SGX543MP4 performing twice as well as the PowerVR SGX543MP2. I would expect these to be bottlenecked by memory bandwidth or other factors if that were the case in an actual game scenario.

I say it's very likely being bottlenecked. GLBenchmark 2.5 has been released, though: it's an all-new suite that runs at 1080p and uses tests with far better graphics, so we may see whether what I say is true, sooner or later.

If it isn't clear what I'm saying, it's this: the PowerVR SGX543MP4 is twice as powerful as the SGX543MP2, but that advantage is unlikely to show at low resolutions. Just look at game benchmarks with desktop cards: at lower resolutions the tests become more CPU bound than GPU bound, and the same should be true of this. For example, the 1GHz dual-core Cortex A9 may not be powerful enough to allow the GPU to be fully utilised at 1280x720, i.e., at around 400+ FPS, but at 2048x1536, where you're looking at 30-90 FPS, it'll handle it just fine.
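The CPU-bound vs GPU-bound argument can be sketched with a toy model in which each frame costs a fixed CPU time plus a GPU time proportional to pixels drawn, and the frame rate is limited by whichever is slower. All the timings below are made up for illustration:

```python
# Toy frame-time model: fps is capped by the slower of CPU and GPU work.
# GPU cost is assumed proportional to pixels drawn; CPU cost is fixed.

def fps(cpu_ms_per_frame, gpu_ms_per_megapixel, width, height):
    """Frames per second under a max(CPU, GPU) frame-time model."""
    gpu_ms = gpu_ms_per_megapixel * (width * height) / 1e6
    return 1000.0 / max(cpu_ms_per_frame, gpu_ms)

# Hypothetical GPU needing 5 ms per megapixel, CPU needing 8 ms per frame:
low  = fps(8, 5, 1280, 720)    # CPU-bound: the GPU alone would allow ~217 FPS
high = fps(8, 5, 2048, 1536)   # GPU-bound: the pixel count dominates
print(round(low), round(high))
```

At the low resolution the CPU cap hides any extra GPU power, which is the point being made about GLBenchmark at 1280x720.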
 
As you say, there are a lot of other factors that go into it, such as memory bandwidth, but I'm assuming Apple has done what is necessary to keep such other aspects from bottlenecking the SGX543MP4. E.g., if two 32-bit interfaces were enough for an SGX543MP2, then four 32-bit interfaces should be enough for an SGX543MP4, as it's exactly twice the power of the SGX543MP2.
Glad we understand each other that overall performance is the sum of many other factors. However it's the assumption I'm not comfortable with if you know what I mean. I can't simply accept that Apple has done everything to ensure maximum performance without more information.

For example, can a software update to the iPad improve performance? Do software devs have the best tools available to them to produce apps that can maximize the iPad's visual potential? If the answers are "no" and "yes" respectively, then I retract my statements.

As an example of the PowerVR SGX543MP4 not being bottlenecked by these other aspects of the device you speak of, I'd cite the resolution-independent low-level GLBenchmark tests (such as fill rate), which show the PowerVR SGX543MP4 performing twice as well as the PowerVR SGX543MP2. I would expect these to be bottlenecked by memory bandwidth or other factors if that were the case in an actual game scenario.
Fair enough.

I say it's very likely being bottlenecked. GLBenchmark 2.5 has been released, though: it's an all-new suite that runs at 1080p and uses tests with far better graphics, so we may see whether what I say is true, sooner or later.
I see where you're coming from; however, only time will tell. The term "bottleneck" is a rather vague description, as it really doesn't help identify where the problem may be. For example, all the hardware may be matched perfectly, with no choke points in data bandwidth from the component with the highest throughput to the lowest; however, if the software's written poorly, it may produce symptoms similar to a hardware bottleneck.

I do agree with all your comments regarding RAM bandwidth issues, fill rate concerns and how they relate to the type of graphics being displayed at a given resolution. However, when you take into account how many combinations of problems could be behind what you describe as "bottlenecking", it could easily be more than a dozen.

If it isn't clear what I'm saying, it's this: the PowerVR SGX543MP4 is twice as powerful as the SGX543MP2...
I'll stop you there and state that there's no argument from me in that regard.

Just look at game benchmarks with desktop cards: at lower resolutions the tests become more CPU bound than GPU bound, and the same should be true of this. For example, the 1GHz dual-core Cortex A9 may not be powerful enough to allow the GPU to be fully utilised at 1280x720, i.e., at around 400+ FPS, but at 2048x1536, where you're looking at 30-90 FPS, it'll handle it just fine.

On desktop machines I'm in agreement 100% regarding the relationship between performance at a given resolution in relation to CPU vs GPU activity.

Getting back to the thread topic, my point is that benchmarks don't always reveal the most meaningful information. A test may show that the A5X is slower than the A5 (not by much, but still a measurable amount) in a test app. However, a user actually using both may not notice any performance difference, and to me that's more valuable.

The Tegra 3 is a four-core CPU, but does it behave like one, and what kinds of applications (or situations) fully capitalize on the advantage, if any, of having that sort of potential?
 
Glad we understand each other that overall performance is the sum of many other factors. However it's the assumption I'm not comfortable with if you know what I mean. I can't simply accept that Apple has done everything to ensure maximum performance without more information.
Fair enough.
For example, can a software update to the iPad improve performance? Do software devs have the best tools available to them to produce apps that can maximize the iPad's visual potential? If the answers are "no" and "yes" respectively, then I retract my statements.
Regarding the former, and specifically graphics performance, I'd have to say not really, no. And regarding the latter, yes.
I see where you're coming from; however, only time will tell. The term "bottleneck" is a rather vague description, as it really doesn't help identify where the problem may be. For example, all the hardware may be matched perfectly, with no choke points in data bandwidth from the component with the highest throughput to the lowest; however, if the software's written poorly, it may produce symptoms similar to a hardware bottleneck.
That's true, but I really doubt that Apple would invest in substantially better hardware but fail to provide adequate software to handle it.

I'm not sure how familiar you are with the PowerVR SGX543MP series, but unlike SLI or CrossFire on the desktop, the SGX543s actually handle all of that themselves, so from a developer's perspective there's no difference between an SGX543MP2 and an MP16. I'd say that should be true for Apple too: iOS shouldn't need any update to effectively use the PowerVR SGX543MP4, as the GPUs act as one.
I do agree with all your comments regarding RAM bandwidth issues, fill rate concerns and how they relate to the type of graphics being displayed at a given resolution. However, when you take into account how many combinations of problems could be behind what you describe as "bottlenecking", it could easily be more than a dozen.
That's true, I'd expect the bottlenecking is the processor though, as it is quite weak comparatively.
Getting back to the thread topic, my point is that benchmarks don't always reveal the most meaningful information. A test may show that the A5X is slower than the A5 (not by much, but still a measurable amount) in a test app. However, a user actually using both may not notice any performance difference, and to me that's more valuable.
Oh yeah, I know. Benchmarks are better than nothing, though. What I'd really like to see is the UDK updated to run at the iPad 3's native resolution; that'd be a very accurate benchmark.
The Tegra 3 is a four-core CPU, but does it behave like one, and what kinds of applications (or situations) fully capitalize on the advantage, if any, of having that sort of potential?
I'd say it's only games that take advantage of the CPU, and few of them at that.
 
Would it be a fair assumption to say a dual core at a higher clock (e.g. 1.5GHz) would benefit an iPad more than a lower-clocked quad core?

As they do in regular gaming PCs...
 
Yes and no. The switch to the Cortex A15 architecture will bring significant performance improvements, even with only a dual-core at current speeds. But a quad-core will be very useful in games and heavily multi-threaded apps.
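The "yes and no" can be made concrete with an Amdahl's-law sketch: which configuration wins depends on how much of the workload parallelises (the clocks come from the question above; the parallel fractions are assumptions for illustration):

```python
# Amdahl's-law comparison of a 2-core 1.5 GHz part vs a 4-core 1.0 GHz
# part. Only the parallel fraction of the work benefits from extra cores;
# the serial fraction benefits only from clock speed.

def speedup(cores, clock_ghz, parallel_fraction):
    """Throughput relative to a single 1.0 GHz core."""
    serial = 1.0 - parallel_fraction
    return clock_ghz / (serial + parallel_fraction / cores)

for p in (0.5, 0.9):  # lightly threaded vs heavily threaded workload
    dual = speedup(2, 1.5, p)
    quad = speedup(4, 1.0, p)
    print(p, round(dual, 2), round(quad, 2))
```

With half the work parallel, the faster dual-core wins (2.0x vs 1.6x); at 90% parallel, the quad-core pulls ahead (about 2.73x vs 3.08x), matching the "useful in heavily multi-threaded apps" point.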
 
I had the Asus Transformer Prime and, honestly, if it wasn't for the Wi-Fi issue I would still have it. I don't fancy benchmarking, so in real-world terms both were really fast, but Android is still a little laggy; nothing to really complain about, though, IMO.

The 4:3 aspect ratio sucks for watching movies but is great for books and comics. It's a wash for me.

As for games, I would consider myself a hardcore gamer and wouldn't really game on either machine. Though if I had to choose one, it would be the Transformer Prime, mainly because I can hook up a PS3 controller to it, and emulation on Android devices is way better than on iOS devices. FPse runs flawlessly on my single-core phone, and no root (jailbreak) is needed.

And a lot of Android guys know the SGX543MP4 is the best mobile GPU out right now; it's crazy that NVIDIA doesn't switch to PowerVR.
 
Must - remember- this - one - !!1!one!

Yeah, it does put things into perspective.

Nothing will come of this "discussion" because people will say and believe whatever they want to.

There is a famous quote from Isaac Asimov that ends with: "my ignorance is just as good as your knowledge."

Even though it is taken out of context I still think it applies to the majority of discussions on the internet.
 
I actually had a revelation about this topic last night. I was sitting on the couch watching the Cubs lose and screwing around on my iPad, and I was thinking about why I love it so much, and why I would never use anything but the iPad for a tablet. I have never been fond of laptops for portability, so I love the form factor, but there are zillions of tablets with the same form factor. It has the retina display, which I think is a gorgeous screen, but I would have bought it even without that new display. It has souped up innards, but again, so do a lot of other tablets. I don't perceive it to be any faster than my iPad 2, at least for the stuff I personally use it for.

It's not about the screen, the processor, the RAM, or the design. It's 100% about the software and the way it's designed for the iPad. I use a huge variety of software ranging from kids' books to games to productivity to other various utilities. I know that I would not get the same experience with these apps on an Android device. Google is quickly losing control over the Android ecosystem. It's not unified, and it's not nearly as profitable for developers; therefore, it's not as good an experience for the end user. Of course, an Android user will probably downvote me for saying that and retort that their user experience is just fine on Android, but when people say that, it becomes very obvious that they haven't tried iOS on the iPad extensively enough to know any better.
 
It seems some people expect to see things running 2x faster on the iPad 3. That's all kinds of wrong - it'd imply that you can see an iPad 2 running at half speed. It's not like the iPad 2 runs slowly - it's generally perfectly smooth and fluid. You're not going to get twice as fluid as fluid.

The way it works is more like this: First you release the product. Maybe it's a bit slow, like the iPhone 2g/3g or the iPad 1. First job: make it fast enough to run the majority of stuff perfectly. They hit that with the iPad 2. There's no point making it "faster" now, so instead you increase quality while maintaining speed. In this case, they've doubled display quality while maintaining speed. You should be looking for a quality increase, not a speed increase.

Their next job might be to decrease weight while maintaining speed, or it might be to increase app quality in which case a faster cpu/gpu might be in order.

Oh, and by the way: developers *do not* have the best tools for creating anything heavy on the graphics side. We have pretty good tools, but I've seen *way* better profilers, there's plenty the chip can do that we don't really have access to, and there's zero chance of any "bare metal" coding, because everything has to go through Apple's layers of APIs and OpenGL ES. Expect to see far better graphics in PS Vita games for these reasons, even allowing for its much lower-res screen and hardware differences. An iPad with the Vita's screen wouldn't compete with it, in other words.
 