
Intel really needs to step up its work on low-power phone/tablet processors. With 22nm and tri-gate, their manufacturing processes are several generations ahead. Heck, with AMD working in this direction, ARM is going to have some difficult competition in a couple of years.
 

If the A5X is a bigger chipset, it likely won't go in the iPhone 5. Is Apple planning on sticking with the A5 in the iPhone 5? I wonder...

Maybe they're shrinking the A5X down to 28nm in time for the iPhone 5. It seems sensible to pioneer the new architecture on the iPad and then scale it down for the phone.
 
What exactly is the problem with big die then?

For the unhappy geeks it's not really about the big die, but the fact that Apple didn't use the latest technology - a 32nm or 28nm process - which wasn't available back when the iPad's components were being made. ;)

The challenge for Apple is not only to make a tablet with up-to-date components at an affordable price; they also have to make millions and millions of them quicker than any other company in the world. Otherwise it turns into a conspiracy theory: "Apple artificially lowers production to grab attention."
 
The iPad 3 has a larger battery than the iPad 2 (heavier, thicker, will take 2x as long to charge)

err... no, it will take the same amount of time assuming they doubled the charge current... which they surely did, as they manufacture their own AC-to-DC adapters...
 
err... no, it will take the same amount of time assuming they doubled the charge current... which they surely did, as they manufacture their own AC-to-DC adapters...

This is limited by the USB 2.0 spec. Maybe it will charge faster if you plug it into the wall though.
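Back-of-the-envelope, for anyone who wants to sanity-check the charging claims. This sketch assumes the commonly reported battery capacities (~25 Wh for the iPad 2, ~42.5 Wh for the new iPad), the 10W Apple adapter, a standard 500 mA / 5 V USB 2.0 port, and a guessed conversion efficiency:

```python
# Rough charge-time arithmetic for the scenarios discussed above.
# Battery capacities are the widely reported figures; efficiency is an assumption.
def charge_hours(battery_wh, charger_watts, efficiency=0.85):
    """Idealized charge time; real charging tapers off near full."""
    return battery_wh / (charger_watts * efficiency)

print(charge_hours(25.0, 10.0))   # iPad 2 on the 10 W adapter  -> ~2.9 h
print(charge_hours(42.5, 10.0))   # new iPad on the same adapter -> ~5.0 h
print(charge_hours(42.5, 2.5))    # USB 2.0 port (500 mA @ 5 V)  -> ~20 h
```

So with the same 10W adapter, roughly 70% more battery means roughly 70% longer on the charger; only a higher-current adapter would keep the time constant.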
 
Correct me if I'm wrong, but the iOS system libraries have access to OpenCL, which uses the GPU to offload certain processes, so a quad-core GPU is a much bigger deal than just rendering 3D graphics and powering the retina display.

As has been stated ad nauseam, Apple's ability to get the industrial designers, interface designers, CPU & GPU engineers and software people all in the same room to hash out problems is going to produce real-world advantages that belie specs and confound competitors. Nerds will moan; no one else will.
 
Pixels per inch are doubled.
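Quick sanity check on that, assuming the usual 9.7-inch diagonal and the published resolutions:

```python
import math

def ppi(width_px, height_px, diagonal_inches):
    """Pixels per inch = diagonal pixel count / diagonal size."""
    return math.hypot(width_px, height_px) / diagonal_inches

print(ppi(1024, 768, 9.7))    # iPad 2   -> ~132 ppi
print(ppi(2048, 1536, 9.7))   # new iPad -> ~264 ppi
```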

----------



No app currently uses the A5 to its fullest, so what makes you think there will be one any time soon?

Hell, even the single-core A4 hasn't been choked. There is still CPU power left.

HAH!!!! Why do you think some apps have progress bars, and why do particle demos give you 10,000 particles rather than 100,000 or millions? Because the CPU and GPU can't handle them. Video-processing apps have long wait times to render movies. Augmented-reality apps are only doing so much, yet could be doing so much more if they had more CPU and GPU power.

<walks off muttering to himself: "no app uses the A5 to its fullest, my @$$... no app... dumb...">
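For what it's worth, the particle-count ceiling really does fall out of simple frame-budget arithmetic. The per-particle cost below is an illustrative assumption, not a measurement of any real device:

```python
# Toy per-frame budget arithmetic behind the particle-count point above.
FRAME_BUDGET_S = 1.0 / 60.0     # 60 fps target
COST_PER_PARTICLE_S = 1.5e-6    # assumed CPU cost to update one particle

max_particles = int(FRAME_BUDGET_S / COST_PER_PARTICLE_S)
print(max_particles)            # ~11,000 -- why demos stop around 10,000
# Halve the per-particle cost (a faster CPU) and the ceiling only doubles:
print(int(FRAME_BUDGET_S / (COST_PER_PARTICLE_S / 2)))   # ~22,000
```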
 
Put an i7 in an iPad for $499 and I will pre-order the **** out of it :rolleyes:

You would really fit in at Apple design. I hear they are looking for people like you.

----------

They might be about the same size, but the A5X will probably pull ~2W at full load, while the Ivy Bridge i7s have a TDP of 77W :)

And that TDP also creates an ocean-boiling amount of heat. Water-cooled tablet, anyone?
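To put numbers on it, here is the runtime math, assuming the commonly reported ~42.5 Wh battery:

```python
# Hours of battery per watt drawn: why a 77 W part can't live in a tablet.
BATTERY_WH = 42.5   # reported new-iPad battery capacity (assumption)

for watts in (2, 77):   # ~A5X at full load vs. quad-core Ivy Bridge TDP
    print(watts, "W ->", round(BATTERY_WH / watts, 1), "hours")
# 2 W  -> ~21 hours (SoC alone; the screen dominates in practice)
# 77 W -> ~0.6 hours, ignoring that no tablet could dissipate the heat
```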
 
Because the CPU and GPU can't handle them. Video-processing apps have long wait times to render movies. Augmented-reality apps are only doing so much, yet could be doing so much more if they had more CPU and GPU power.

But that's precisely why there was no CPU upgrade. Apple could have waited half a year to release the iPad when the new chips were ready, boosted the CPU at the expense of the GPU, or upgraded the GPU. They chose the last one.

It would've been impossible for them to upgrade everything in time for the March release at the target price. I'm actually fairly impressed Apple managed to pack in 70% more battery, a larger chip, and a higher-resolution display at the same price.
 
For the unhappy geeks it's not really about the big die, but the fact that Apple didn't use the latest technology - a 32nm or 28nm process - which wasn't available back when the iPad's components were being made. ;)

January?

----------

This is limited by the USB 2.0 spec. Maybe it will charge faster if you plug it into the wall though.

The iPad doesn't charge when plugged into a standard USB port. It doesn't charge based on the USB spec; it just uses a USB cable to carry the power.

----------

err... no, it will take the same amount of time assuming they doubled the charge current... which they surely did, as they manufacture their own AC-to-DC adapters...

The new iPad takes "hours longer to charge," according to review sites.
 
Power usage and heat. The iPad is a mobile device so weight / battery life are of paramount importance. The iPad 3 has a larger battery than the iPad 2 (heavier, thicker, will take 2x as long to charge); meanwhile, the big A5X die will likely get quite hot (notice the heat spreader in the teardown?); this heat will spread to the Li-Ion battery and decrease its lifespan (probably only by a few percent, but still...).

We have absolutely no idea of the state of Samsung's 32nm/28nm process. Samsung has not yet released any SoCs based on the smaller process nodes, so we could be months away from the stock levels a customer like Apple needs. Apple has trouble keeping up with demand anyway, so why make the situation worse by utilizing a brand-new process with low yields? Apple makes better profits from selling a device based on an older process than from selling "out of stock" signs.
 
We have absolutely no idea of the state of Samsung's 32nm/28nm process. Samsung has not yet released any SoCs based on the smaller process nodes, so we could be months away from the stock levels a customer like Apple needs. Apple has trouble keeping up with demand anyway, so why make the situation worse by utilizing a brand-new process with low yields? Apple makes better profits from selling a device based on an older process than from selling "out of stock" signs.

Precisely. Same reason they are using a 45nm LTE chip from Qualcomm. There's no chance TSMC could provide 28nm volume for the 9615, even if it were already out.

In fact, if the process continues to have troubles, volume for the next iPhone launch could be an issue.
 
Someone in the AnandTech comments mentioned the A5X die is about the same size as the upcoming 22nm quad-core Ivy Bridge processors. If this isn't evidence that Apple messed up by using a 45nm process (vs. waiting two months and moving to 32nm), I don't know what is.

Except that it's not two months, is it? Apple has been making iPads since the first week of January to have enough ready to ship in mid-March. Even assuming a 32nm process works and can produce the vast number of chips required right out of the gate, you'd be looking at delaying the launch until, what, July at the earliest? Longer if you assume parts were being made before that assembly time.

And for what? No, really, what benefit does Apple get by doing that? A15 and PowerVR Series 6 wouldn't really be ready by then, so the only benefit would be to go to a quad-core A9, which a) would bring questionable benefits to the iPad anyway (how many apps really NEED quad-core CPUs?) and b) would still get spanked by the A15 designs, which would then have been only a few months away from launching. The third-gen iPad is hardly a slouch and feels just as quick in day-to-day use as the iPad 2. No one outside of spec geeks cares about the CPU, and all Apple would have done is hand the advantage to Android, as someone, most likely Asus or Samsung, would have got a high-def screen out the door before Apple shipped the new iPad.

Don't get me wrong, more power is always nice, but right now it's hard to see what Apple could have done differently. The roadmap is pretty clear: the fourth-gen iPad will see an A6 based on the next-gen architecture and probably receive a big ol' speed boost as a result. The iPhone 6 will certainly get the A6 as well. The one that's still an unknown is the iPhone 5, but I wouldn't be surprised to see it get an A5 variant on 32nm, as they're still going to be close to the wire to get Cortex A15 CPUs out in volume by late summer.

Sorry, but when you're talking about using utterly unproven technology the moment it's available for a mass-market product like the iPad, well, that seems like a massive and unnecessary risk to me.
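For the spec-minded, the die-size side of this argument can be sized with ideal scaling, where area shrinks with the square of the feature-size ratio (real shrinks always fall short of this). The ~165 mm² figure is the approximate reported A5X die size:

```python
# Ideal-scaling sketch: die area vs. process node, all else equal.
A5X_MM2 = 165.0   # approximate reported A5X die size at 45 nm

for node in (45, 32, 28):
    area = A5X_MM2 * (node / 45.0) ** 2
    print(f"{node} nm -> ~{area:.0f} mm^2")
# 45 nm -> ~165 mm^2
# 32 nm -> ~83 mm^2
# 28 nm -> ~64 mm^2
```

So yes, a shrink would roughly halve the die, but only once the smaller node exists in volume, which is the whole dispute.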
 
HAH!!!! Why do you think some apps have progress bars, and why do particle demos give you 10,000 particles rather than 100,000 or millions? Because the CPU and GPU can't handle them. Video-processing apps have long wait times to render movies. Augmented-reality apps are only doing so much, yet could be doing so much more if they had more CPU and GPU power.

<walks off muttering to himself: "no app uses the A5 to its fullest, my @$$... no app... dumb...">

.... You do know there's more to execution than just CPU and GPU clock speeds, right? Right?

Say, how about storage bandwidth and latency? Memory bandwidth and latency? Cache size? Programmer competency?
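To put one of those bottlenecks in perspective: just writing the retina framebuffer once per frame is a lot of memory traffic, before any compositing, texture reads or overdraw:

```python
# Back-of-the-envelope memory traffic for one framebuffer write pass.
WIDTH, HEIGHT = 2048, 1536
BYTES_PER_PIXEL = 4              # 32-bit RGBA
FPS = 60

bytes_per_frame = WIDTH * HEIGHT * BYTES_PER_PIXEL
print(bytes_per_frame / 2**20)         # ~12 MiB per frame
print(bytes_per_frame * FPS / 2**30)   # ~0.7 GiB/s for one write pass
# Compositing, texture fetches and overdraw multiply this several times over.
```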
 
I wonder if the A5X name is indicating a switch to the iPhone getting the new chip first? So the next iPhone will have the A6, then next year's iPad will get the A6X.
 
This heat will spread to the Li-Ion battery and decrease its lifespan (probably only by a few percent, but still...).

That minuscule amount of heat won't make any difference to the battery; it's only a few degrees warmer than ambient temperature, nothing like the temperatures laptops get to.
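Neither side quantifies it, so for scale: a common rule of thumb is that Li-ion aging roughly doubles for every ~10°C temperature rise. That's an approximation, not a datasheet figure, but it suggests a few degrees is a small effect, not zero:

```python
# Hedged sketch of the "heat ages the battery" claim, using the rough
# rule of thumb that Li-ion aging doubles per ~10 C rise (approximation).
def relative_aging_rate(delta_c):
    return 2 ** (delta_c / 10.0)

print(relative_aging_rate(3))    # 3 C warmer  -> ~1.2x faster aging
print(relative_aging_rate(10))   # 10 C warmer -> ~2x faster aging
```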

The spec-geeks in this thread don't seem to realise that it takes time to develop a CPU, make millions of them, solder them to a PCB and assemble everything into a device. It's not like OEM PC vendors, who can just slap a new pre-made/designed CPU in a slot, or swap out the pre-made/designed motherboard for something different. Each CPU and motherboard in the iOS devices is bespoke, taking much longer to design and make than a generic reference design. That's why you don't see small PCs sporting the very latest CPUs; it takes time to design and test a bespoke system.

If Apple had waited several months for the 28nm process, they'd have spent possibly months more fixing problems at that size before being able to increase yields and make millions of the chips. With 45nm, Apple knows roughly how a design will function before it is etched onto silicon, whereas that is not the case with the untested 28nm process.
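The yield argument can be illustrated with the classic Poisson die-yield model, yield = exp(-area × defect density). The defect densities below are made up to show the trend, not actual Samsung numbers:

```python
import math

# Classic Poisson die-yield model; D0 values are illustrative only.
def poisson_yield(area_cm2, defects_per_cm2):
    return math.exp(-area_cm2 * defects_per_cm2)

DIE_CM2 = 1.65                       # ~165 mm^2 A5X-class die
print(poisson_yield(DIE_CM2, 0.2))   # mature node, D0=0.2   -> ~72% good dies
print(poisson_yield(DIE_CM2, 1.0))   # immature node, D0=1.0 -> ~19% good dies
```

Big die on an immature process is the worst combination, which is exactly the point about not launching on a brand-new node.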
 
Power usage and heat. The iPad is a mobile device so weight / battery life are of paramount importance.

And the Ivy Bridge quad-core (of the same size) runs cooler and consumes similar power? Not.

The A5X has more "stuff" (USB controllers and what would be I/O-hub circuitry in the corresponding Intel chipset). It isn't just an ARM core; that's what the article is pointing out. The majority is not ARM-core stuff. All that extra stuff takes up room but not necessarily a lot of power.

the big A5X die will likely get quite hot (notice the heat spreader in the teardown?);

Relative to the A5, perhaps. That may be why they didn't go with package-on-package and mount the RAM on top. However, it is still substantially cooler than the Ivy Bridge die.


The battery capacity went up mostly because:

1. there are a lot more pixels to light up (the screen is the major power draw; more pixels and more backlight only make that worse).

2. there are more multi-output radios on board (another power draw, and a relatively independent one).

3. the two additional GPU cores are probably drawing more.

Only one of those has to do with the A5X package.
 
 
We have absolutely no idea of the state of Samsung's 32nm/28nm process. Samsung has not yet released any SoCs based on the smaller process nodes, so we could be months away from the stock levels a customer like Apple needs. Apple has trouble keeping up with demand anyway, so why make the situation worse by utilizing a brand-new process with low yields? Apple makes better profits from selling a device based on an older process than from selling "out of stock" signs.

Agreed, but seeing as I don't own any Apple stock, this is immaterial to me. I am more interested in getting devices based on modern process technology if I am paying top dollar for them. The iPad 3 will do alright versus its competitors for a few months, but once Android/Win8 tablets start adopting 28nm SoCs and 4G radios, Apple's software advantage will evaporate. This is the Achilles' heel of their once-a-year release cycles. Of course, next year it's likely that everyone will be stuck at 28nm, and Apple will be back on top (hence why I'm waiting for the "iPad 4").

Sorry, but when you're talking about using utterly unproven technology the moment it's available for a mass-market product like the iPad, well, that seems like a massive and unnecessary risk to me.

If Apple had waited 2-3 more months before releasing the new iPad, chances are Samsung's 32nm low-voltage node would've panned out fine (if Samsung was willing to allow Apple access to it, which is an unanswered question). Sure, there might have been shortages for a while due to less-than-ideal yields, but I also saw a pile of unsold iPads at the Apple Store today when I returned mine...

The spec-geeks in this thread don't seem to realise that it takes time to develop a CPU, make millions of them, solder them to a PCB and assemble everything into a device. It's not like OEM PC vendors, who can just slap a new pre-made/designed CPU in a slot, or swap out the pre-made/designed motherboard for something different. Each CPU and motherboard in the iOS devices is bespoke, taking much longer to design and make than a generic reference design. That's why you don't see small PCs sporting the very latest CPUs; it takes time to design and test a bespoke system.

The CPU in the A5X is completely unchanged from last year. This is pretty surprising in the context of other rapidly-evolving mobile devices, and I suspect you'll see Apple's competitors jumping to take advantage of it in short order. Apple made a bet that the retina display will be enough to differentiate the iPad from other tablets into 2013.

As far as anyone knows, Apple is using stock ARM CPU cores and PowerVR GPU cores -- the same as pretty much everyone else in the industry. There's probably a bit of additional engineering going into the Ax chips (power gating?), but not nearly as much as was originally required to design the CPU/GPU components.

And the Ivy Bridge quad-core (of the same size) runs cooler and consumes similar power? Not.

See post #25.
 
frick said:
The CPU in the A5X is completely unchanged from last year. This is pretty surprising in the context of other rapidly-evolving mobile devices, and I suspect you'll see Apple's competitors jumping to take advantage of it in short order. Apple made a bet that the retina display will be enough to differentiate the iPad from other tablets into 2013.

As far as anyone knows, Apple is using stock ARM CPU cores and PowerVR GPU cores -- the same as pretty much everyone else in the industry. There's probably a bit of additional engineering going into the Ax chips (power gating?), but not nearly as much as was originally required to design the CPU/GPU components.

Apple's A9 cores are very much custom and are larger than a vanilla A9 core. I've seen one forum poster suggest they were using a type of logic called m-of-n encoding.
 
Yes, it handles everything nicely.

But so did the A4.

My worry is that we will not see as many new and innovative apps as we did in 2011 because of the lack of extra processing power.

This is just a misguided worry. The biggest bottleneck was the RAM, not the CPU. The dual-core A9 does just fine. Have you seen what you can do with iMovie or iPhoto, to name just a couple? The doubling of the RAM is what will allow devs to continue to make more advanced apps, while the monster GPU and battery will easily handle the insane screen and continued advancement in gaming. In 12 short months we will see a bigger jump forward with A15-based kit... in the meantime we are doing just fine with the A5X.
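To make the RAM point concrete: retina-sized surfaces eat memory capacity fast, and real apps hold several layers and textures at once. The layer counts below are illustrative, not profiled figures:

```python
# Why the 512 MB -> 1 GB bump matters: fullscreen 32-bit surfaces are big.
FULLSCREEN_MIB = 2048 * 1536 * 4 / 2**20   # ~12 MiB per fullscreen buffer

for layers in (4, 10, 20):                  # illustrative layer counts
    print(layers, "fullscreen layers ->", round(layers * FULLSCREEN_MIB), "MiB")
# 4 -> 48 MiB, 10 -> 120 MiB, 20 -> 240 MiB: painful in 512 MB, fine in 1 GB
```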

----------

Agreed, but seeing as I don't own any Apple stock, this is immaterial to me. I am more interested in getting devices based on modern process technology if I am paying top dollar for them. The iPad 3 will do alright versus its competitors for a few months, but once Android/Win8 tablets start adopting 28nm SoCs and 4G radios, Apple's software advantage will evaporate. This is the Achilles' heel of their once-a-year release cycles. Of course, next year it's likely that everyone will be stuck at 28nm, and Apple will be back on top (hence why I'm waiting for the "iPad 4").



If Apple had waited 2-3 more months before releasing the new iPad, chances are Samsung's 32nm low-voltage node would've panned out fine (if Samsung was willing to allow Apple access to it, which is an unanswered question). Sure, there might have been shortages for a while due to less-than-ideal yields, but I also saw a pile of unsold iPads at the Apple Store today when I returned mine...



The CPU in the A5X is completely unchanged from last year. This is pretty surprising in the context of other rapidly-evolving mobile devices, and I suspect you'll see Apple's competitors jumping to take advantage of it in short order. Apple made a bet that the retina display will be enough to differentiate the iPad from other tablets into 2013.

As far as anyone knows, Apple is using stock ARM CPU cores and PowerVR GPU cores -- the same as pretty much everyone else in the industry. There's probably a bit of additional engineering going into the Ax chips (power gating?), but not nearly as much as was originally required to design the CPU/GPU components.



See post #25.

This is all mostly untrue. Anyone who has run Macs for a while can tell you that OS X, and by extension iOS, LOVES memory. And the new iPad just got bumped from 512 MB to 1 GB. I fail to see where all the worrying about the dual-core A9 is coming from. First of all, not all A9 chips are equal; this is why the A5, the Exynos, and the Snapdragon all crush the Tegra 2, for example. Everyone is doing things a little differently, and Apple happens to be one of the fastest, if not THE fastest, of the lot while running at lower clock frequencies.

As I just said in another post... look at what you can do with an iPad that you can't even remotely come close to doing on any other tablet: iMovie, iPhoto, Pages, Numbers, Keynote, likely MS Office this year, more advanced gaming... the list goes on. Memory was the biggest issue, and that has been remedied. The A5X will not only have no problem with the best devs can throw at it for the next 12 short months, it will handle it with ease. Furthermore, iOS software will continue to make a complete mockery of the competition's blown-up Java phone apps.

And I'm not even remotely concerned about Win8 on the tablet. It's easy to get caught up in the hype after a consumer preview released in February for a product that won't ship until late fall at best. By the time that market ramps up, we will be looking at iOS 6 and all the new APIs that go with it, and already reading rumors about the 4th-generation iPad.
 

Intel really needs to step up its work on low-power phone/tablet processors. With 22nm and tri-gate, their manufacturing processes are several generations ahead. Heck, with AMD working in this direction, ARM is going to have some difficult competition in a couple of years.

Medfield looks very promising: http://www.anandtech.com/show/5365/intels-medfield-atom-z2460-arrive-for-smartphones

Agreed, but seeing as I don't own any Apple stock, this is immaterial to me. I am more interested in getting devices based on modern process technology if I am paying top dollar for them. The iPad 3 will do alright versus its competitors for a few months, but once Android/Win8 tablets start adopting 28nm SoCs and 4G radios, Apple's software advantage will evaporate. This is the Achilles' heel of their once-a-year release cycles. Of course, next year it's likely that everyone will be stuck at 28nm, and Apple will be back on top (hence why I'm waiting for the "iPad 4").

But you have to remember that the average iPad buyer couldn't care less about the process node of the SoC. One of the main points of tablets is that you no longer have to worry about the specifications. You get a device that "just works". Consumers buy the device based on the specs they can see, and in the new iPad those are the "retina" screen and LTE/4G.

Sure, we geeks like to look at the actual components, and I don't think anyone can say they prefer mature 45nm to mature 32nm/28nm (mature being the key word here). The gains are obvious. However, the gains are nothing if you can't supply devices so that users can actually benefit from them. I'm sure Apple would have used 32nm/28nm if it had been ready in the volume they need.

As far as anyone knows, Apple is using stock ARM CPU cores and PowerVR GPU cores -- the same as pretty much everyone else in the industry. There's probably a bit of additional engineering going into the Ax chips (power gating?), but not nearly as much as was originally required to design the CPU/GPU components.

But is that a bad thing? Going solo doesn't automatically mean better performance or efficiency. Look at NVIDIA and Tegra 3: they have decades of experience, yet they are behind Apple and PowerVR in GPU performance. Going totally custom on the SoC requires lots of R&D, and the end result may be worse than what you get by using stock designs.

I don't think Apple has any urge or need to be better in raw performance. By using ARM and PowerVR, Apple makes sure they are about on par with the rest, which is enough. The advantages are elsewhere.
 
Medfield looks very promising: http://www.anandtech.com/show/5365/intels-medfield-atom-z2460-arrive-for-smartphones



But you have to remember that the average iPad buyer couldn't care less about the process node of the SoC. One of the main points of tablets is that you no longer have to worry about the specifications. You get a device that "just works". Consumers buy the device based on the specs they can see, and in the new iPad those are the "retina" screen and LTE/4G.

Sure, we geeks like to look at the actual components, and I don't think anyone can say they prefer mature 45nm to mature 32nm/28nm (mature being the key word here). The gains are obvious. However, the gains are nothing if you can't supply devices so that users can actually benefit from them. I'm sure Apple would have used 32nm/28nm if it had been ready in the volume they need.



But is that a bad thing? Going solo doesn't automatically mean better performance or efficiency. Look at NVIDIA and Tegra 3: they have decades of experience, yet they are behind Apple and PowerVR in GPU performance. Going totally custom on the SoC requires lots of R&D, and the end result may be worse than what you get by using stock designs.

I don't think Apple has any urge or need to be better in raw performance. By using ARM and PowerVR, Apple makes sure they are about on par with the rest, which is enough. The advantages are elsewhere.

Apple has its own chip-design team. The big difference between Apple and the competition is that Apple gets to build exactly the part it wants/needs. Everyone else has to buy off-the-shelf parts from NVIDIA, Samsung or Qualcomm and just make them work. This is where Apple just confounds the competition. The new GPU and screen of the new iPad alone pretty much killed the Tegra 3, for example. Now if Asus or somebody else wants to slap in a nicer screen, they will be stuck with a GPU that can only just handle it, and it probably can't handle advanced gaming on top of the high-resolution screen.

By the way, in case anybody was curious about the processing power of the new iPad's GPU vs. the Tegra 3: new iPad GPU = 32 GFLOPS, Tegra 3 = 12 GFLOPS max. This is why the iPad can drive 264 ppi and still easily handle top-notch graphics at excellent framerates.
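Normalizing those figures by each device's native resolution makes the point even clearer. The 1280x800 panel is an assumption about a typical Tegra 3 tablet of the era:

```python
# Per-pixel shader budget: GFLOPS spread over native pixels per frame.
def flops_per_pixel_per_frame(gflops, w, h, fps=60):
    return gflops * 1e9 / (w * h * fps)

print(flops_per_pixel_per_frame(32, 2048, 1536))   # new iPad -> ~170
print(flops_per_pixel_per_frame(12, 1280, 800))    # Tegra 3  -> ~195
# Despite pushing 4x the pixels, the per-pixel budget stays in the same
# ballpark as the competition -- which is exactly the poster's point.
```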
 