I'm not bothered about stupid "M" processors for tablets.

The tablet market has stalled the enthusiast CPU market for those of us who want affordable, powerful processors for emulation, video encoding and encryption.

The sooner the tablet market dies out the better IMO.

Yikes.. that's a pretty unaltruistic viewpoint... The masses be damned! What I want is more important! :eek:
 
What's wrong with ARM? It may never be faster than x86 in the same timeframe, but performance will become good enough eventually. Apple will then be able to design the chips themselves, and take their pick of TSMC, Samsung, or GlobalFoundries to manufacture them.

Especially with the rise of apps, the CPU is becoming increasingly commoditized.

Depends what for: tablets and other lightweight devices can replace traditional desktops and laptops for browsing, viewing videos and Facebooking just fine, but ARM devices have a long way to go to replace a computer in the professional fields.
 

Completely agree. I use Office, SQL Server, etc. at work, and it will be a decade or two before ARM is even a possibility there.

But for the majority of users, even just an ARM coprocessor that uses 5 W instead of 28 W in a MacBook would be a major improvement, especially if most of the time people are just using the browser.
 
We haven't yet hit a wall with Moore's Law. Given Apple's continued success with their own processor designs, I think if Intel can find a way to make strides like they're making with Broadwell M, Apple can continue the performance boosts of their A8.

In other words, physics isn't stopping them yet. Until the laws of physics become a hurdle, I expect Apple to continue their trend of doubling the power of their chips.

Moore's Law is a rule of thumb; the efficiency improvements can come from the process node (going from 22nm to 14nm) or from processor design. Intel typically does a bit of both. Apple is purely focused on design. Their manufacturer (Samsung or TSMC) will say, "we can build on 28nm," and Apple will design with that in mind. Apple's designs are, no doubt, some of the best. However, they're at the mercy of their chip manufacturer for any advancements from the process node. Barring any substantial leap in design, you typically need a little help from both processor design and a process node jump to get a doubling in performance.
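
To make that concrete, here's a back-of-envelope sketch in Python. Both factors are made-up, illustrative numbers, not measurements of any real chip:

```python
# Back-of-envelope sketch of the point above. Both factors are invented,
# purely to show how the two sources of improvement compound.

node_speedup = 1.4     # e.g. clock/power headroom from a full node shrink
design_speedup = 1.45  # e.g. IPC gain from a new microarchitecture

print(f"node alone:   {node_speedup:.2f}x")
print(f"design alone: {design_speedup:.2f}x")
print(f"combined:     {node_speedup * design_speedup:.2f}x")  # ~2.03x
```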

Except that Apple designs their chips... I get that they're somewhat limited by the die sizes and such, but...

Doesn't TSMC have a 10nm process? It's slated for 2015 or 2016?

So Apple can do it, but I think relying on design alone is not enough. Just like any other CPU designer, they can benefit immensely from node shrinks.

TSMC does plan a 10nm process, slated for a 2016 launch. They're going to remain behind Intel, because Intel's roadmap is still on target, at least according to Intel. Intel states Broadwell was delayed (14nm) but the 10nm, 7nm and 5nm roadmaps are not going to shift too much (meaning Broadwell might have a short life, though of course under Tick-Tock the next CPU design stays on 14nm). So the follow-up to Broadwell, Skylake, due end of 2015, will still be 14nm but a fairly new design on that same node, which should bring some improvements. (Broadwell -> Skylake is like going from Ivy Bridge -> Haswell.) Intel may not be able to jump to 10nm until 2016 either. However, even at the same node Intel has some design superiority in their chips (their version of FinFET, the tri-gate transistor).

In short, Intel is ahead on process nodes, for now, and likely through 2015.

True - but again, if Intel can do it, I think Apple can.

Less than a month until we find out!

I guess what you're most concerned about is whether Apple can again double performance. Going from Swift to Cyclone was a huge jump in performance, for a few reasons. It was close to the best-case scenario: improvements in CPU design and a process shrink together enabled the jump. That alignment of the stars last year is not necessarily going to be at play this fall.

Last year, Apple benefited from:
  1. Apple had a process node shrink from Swift (A6) to Cyclone (A7), going from 32nm to 28nm.
  2. Apple also licensed the newer ARM architecture and moved from ARMv7 to ARMv8 (so some of the groundwork was done for them)...
  3. upon which Apple's CPU design team made some excellent design choices

The advance is so tremendous that not many apps actually make full use of Cyclone yet. That, and the fact that competitors are not exactly closing the gap in 2015, should tell us something about what Apple might have planned this year.

First, let me back up the claim that the competition isn't close. Qualcomm won't have a true competitor until 2016, when its own custom 64-bit ARM design arrives; its 2015 chips are simply licensed stock 64-bit ARM designs without any Qualcomm magic baked in.

For this reason, I think there's a good chance that this year's chip from Apple will be more of an evolution than the revolution that Cyclone was. With a minor bump in CPU performance from perhaps a smaller node or design tweaks, they could get similar to moderately better performance in a lower power envelope, which translates to a thinner design, better battery life, or both.
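
As a rough illustration of that trade-off: dynamic power scales roughly as P = C * V^2 * f, so a shrink that cuts capacitance and voltage can deliver a modest clock bump and still land in a lower power envelope. The values below are invented purely to show the shape of the trade:

```python
# Illustrative only: dynamic power scales roughly as P = C * V^2 * f.
# All values are assumptions, normalized to the current-node chip.

def dyn_power(c, v, f):
    """Relative dynamic power for capacitance c, voltage v, clock f (GHz)."""
    return c * v**2 * f

p_old = dyn_power(c=1.0, v=1.0, f=1.3)  # current node, normalized
p_new = dyn_power(c=0.8, v=0.9, f=1.4)  # smaller node: less C, lower V
print(f"relative power: {p_new / p_old:.2f} "
      f"(~{100 * (1 - p_new / p_old):.0f}% less, at a slightly higher clock)")
```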

But no one knows what's going to happen with the A8 chip of course, not until a month from now like you said. So we all wait, with anticipation. :)
 
Am I the only one who does not want a 12 inch MBA? I am currently using a 13 inch MBA (mid 2011) and find it the perfect size. I owned a Dell XPS 12 for a few months and found the 12 inch to actually be too small. I seriously hope Apple does not replace the 13" models.
 

You're not looking far enough back in history.
A5 to Swift got about a 2x jump (about 1.6x frequency from the process change: 45nm to 32nm), the rest from better micro-architecture.
Swift to Cyclone got a 1.5x jump, ALL from improved micro-architecture (no change in frequency; the [slightly] improved process was used to provide more of everything, not to run it faster).
Assuming naive scaling and an unchanged Cyclone core, it would be reasonable to expect the A8 to run at 1.8 to 2GHz, and the rumors (to the extent they can be trusted) say 2GHz (though that COULD refer to the iPad version, with the iPhone at 1.8GHz). This is a speed-up of around 1.4 to 1.5x.
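
That arithmetic, spelled out (the 1.8 and 2GHz figures are the rumors mentioned above, not confirmed specs):

```python
# Sanity check of the naive scaling above: shipping A7 clock vs. rumored
# A8 clocks.

cyclone_ghz = 1.3
for a8_ghz in (1.8, 2.0):
    print(f"{a8_ghz} GHz -> {a8_ghz / cyclone_ghz:.2f}x over Cyclone")
# 1.38x and 1.54x: the "around 1.4 to 1.5x" range quoted above
```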

Simply speeding up the GHz is not enough, since then you just wait on memory faster. As I've said, I expect the A8 to ship with a vastly improved uncore: a NoC tying items together, and a real L3 cache rather than the slow hack they have on the A7. Add in minor memory controller and prefetcher improvements and they can easily sustain the IPC of Cyclone.
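
To see why clock alone saturates, here's a toy model; the split between core cycles and stall time is invented purely for illustration:

```python
# Toy model, illustrative numbers only: runtime = core work (scales with
# frequency) + memory stall time (does not). Raising the clock alone
# shrinks only the first term.

CORE_CYCLES = 1.0e9      # cycles of actual computation
STALL_SECONDS = 0.25     # time waiting on DRAM, frequency-independent

def runtime(ghz):
    return CORE_CYCLES / (ghz * 1e9) + STALL_SECONDS

base = runtime(1.3)
for ghz in (1.3, 1.8, 2.0, 3.0):
    print(f"{ghz:.1f} GHz: speedup {base / runtime(ghz):.2f}x")
# A better uncore (real L3, prefetchers) attacks STALL_SECONDS instead.
```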

I expect to see the A8 at this 1.5x performance improvement. Moreover, this is not the end of the line. I expect a substantially improved core for the A9 --- there are plenty of tricks and tweaks that have been known for years that are apparently not being used by Cyclone. The A8 to A9 transition will be something like the A6 to A7 transition --- an improved process (16nm, FinFET) --- and like the A6 to A7 transition I expect it mostly to be used for an improved micro-architecture. If, for example, Apple decides to ditch ARM 32-bit instruction support, their design task for the A9 gets easier and they have brain power available to devote to more useful things than backward compatibility.

The list of things they COULD do is long, e.g. minigraphs (instruction fusion like Intel does, worth about 10%), better instruction cluster steering, moving a large amount of instruction decoding to between the L2 and L1 cache and storing decoded and annotated instructions in L1 (easier to do if you have lightweight decoders!), and various techniques for more aggressive (i.e. earlier) physical register reuse. And that's before they even start to consider aggressive kilo-instruction-processing ideas. Hell, on their way to KIP, they could even expand from the current two clusters to three or even four. This might seem like a crazy idea (especially if you retain the current decoder width) but it allows you to run twice as large a ROB and twice as many physical registers without burning more power or hurting cycle time --- basically run on the first pair of clusters till you miss to RAM and fill up the ROB, then switch to the second pair and keep going for another 192 instructions...
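
A very rough toy of that cluster-switching payoff; every parameter below is invented, and the model assumes misses within one window overlap perfectly:

```python
# Toy of the idea above: a larger effective ROB window exposes more
# independent RAM misses, so their latencies overlap instead of
# serializing. All parameters are invented.

import random
random.seed(1)

MISS_RATE = 0.01      # chance an instruction misses to RAM
MISS_LATENCY = 300    # stall cycles per (non-overlapped) RAM miss

def stall_per_instruction(window, trials=20_000):
    """Assume all misses inside one window overlap perfectly, so each
    window pays at most one MISS_LATENCY."""
    total = 0
    for _ in range(trials):
        if any(random.random() < MISS_RATE for _ in range(window)):
            total += MISS_LATENCY
    return total / (trials * window)

for pairs, window in ((1, 192), (2, 384)):
    print(f"{pairs} cluster pair(s), window {window}: "
          f"{stall_per_instruction(window):.2f} stall cycles/instruction")
# Doubling the window roughly halves the stall cost per instruction here.
```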

Along with these ideas, they've expanded the dynamic range of the CPU with each iteration. (Basically the ability to run fast for short periods of time, to make the system feel snappy, while running slower for work that doesn't need to be finished ASAP --- like Intel's turboing.) Just like Intel, I expect this to improve with each iteration so that, in future, we'll be seeing Apple (and other ARM) CPUs rated at something like a nominal 2GHz, but able to turbo up to 3GHz.
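
A minimal sketch of that nominal-vs-turbo behavior; the clocks come from the sentence above, while the thermal budget is an assumed value:

```python
# Minimal sketch of the dynamic-range idea; the turbo budget is assumed,
# not anything any vendor has announced.

NOMINAL_GHZ = 2.0
TURBO_GHZ = 3.0
TURBO_BUDGET_S = 2.0   # how long the headroom lasts before throttling

def average_clock(task_seconds):
    """Turbo while the budget lasts, then drop to the nominal clock."""
    turbo_s = min(task_seconds, TURBO_BUDGET_S)
    return (turbo_s * TURBO_GHZ
            + (task_seconds - turbo_s) * NOMINAL_GHZ) / task_seconds

for t in (0.5, 2.0, 10.0, 60.0):
    print(f"{t:>5.1f}s task -> average {average_clock(t):.2f} GHz")
# Short, interactive bursts see the full 3 GHz; sustained work sees ~2 GHz.
```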
 
Wow! Thanks for the reply.

You've given the most thorough explanation. It seems what you're saying, though, is that while the A8 will likely be 1.5x, the A9 stands poised to benefit from many other microarchitecture advances that haven't yet been incorporated, and could be an even bigger leap.

All I know is I don't think Apple will necessarily push their CPU team this cycle, because they have such a large lead and they're still waiting for apps to utilize the power in the A7. From a business perspective it would be more prudent to expend less here and make minor improvements. By 2016, when others are catching up to Cyclone, Apple can jump their A9 another generation ahead.
 
That's good to hear, as I believe that strictly utilizes the CPU. At least when I use HandBrake on Windows it only uses the processor. That could explain why others say fans rev up for YouTube, as that utilizes the GPU too. It may have more to do with the GPU than the CPU.

I don't know if having the i7 instead of the i5 makes much difference, but if I didn't mention it, my MBA is an i7 (8/512).
 
These teases keep me playing the waiting game. I was definitely going to upgrade my 2010 MBA with Haswell, then I was definitely going to upgrade it with Haswell refresh, now I'm definitely going to upgrade to Broadwell. Each time I'm about to complete my purchase, I look at the total and think "well, my MBA still technically does everything I NEED it to do" and keep waiting. :confused:
 
The discussion of process nodes and processor architecture is one aspect that catches my eye.

On my drive in to work each morning, I can see off to my right Intel's latest new fab (Chandler, AZ), built at a cost of over $5Bn and currently all but idle. It doesn't have the demand to fill it. Foundry companies are being a little more cautious and circumspect about building new fab capacity, focusing more on moving to the next process node than on expanding actual capacity. Some companies can't even give their unwanted but comparatively modern fabs away - the market simply isn't there at the moment.

We may not yet have technically hit the Moore's Law wall, but we are very much now at the point where the price per transistor is no longer falling, and won't fall further. I see here talk of 10, 7 and 5nm nodes. The practical issue, in addition to the cost of the steppers and other fab equipment increasing exponentially from node to node, is the cost of a mask set. Each chip has its own set of masks, perhaps 40 or more in all, for the 4-, 5- and 6-layer metal processes and 3D transistors being used. At these bleeding-edge nodes a mask set costs millions of dollars, and soon tens of millions, and the development costs become so high that Moore's Law is usurped by simple economics.
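
To put rough numbers on that (every figure below is assumed), the mask set is a fixed up-front cost amortized over unit volume:

```python
# Worked example of the mask-set economics; every figure is assumed.

mask_set_cost = 10_000_000   # dollars, a bleeding-edge node mask set

for units in (100_000, 1_000_000, 100_000_000):
    print(f"{units:>11,} units -> ${mask_set_cost / units:,.2f} per chip")
# Niche (gamer-scale) volumes carry a heavy up-front burden per chip;
# phone-scale volumes make the same mask set nearly free per unit.
```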

Squeezing 5 billion transistors onto a die is incredible - but who really needs it? Gamers? Perhaps. A $21Bn industry, according to some - but at $2K or more per system, that's 10 million purchasers per year - call it 15 million to be charitable. Compare that with the hundreds of millions each year buying a new tablet, netbook, laptop etc., and you see very different markets and very different dynamics.

I think we're seeing an inflection point in the industry, and those chasing processing power above all else will not grow in number, and may even dwindle. Processing power is no longer the industry driver it once was. A sub-$300 laptop is clearly now a point where more and more manufacturers are rethinking whether they even want to stay in that market, let alone bring out the next new machine.

Processor architectures haven't changed significantly in a long time - squeezing the most out of architectures is as much about compiler technology as architecture - perhaps more so, especially when making use of multiple cores, GPU-based compute engines, or large register files. ARM is a RISC design, Intel's Core iX devices are CISC designs - fundamentally different in so many ways, yet somehow similar in where their optimisations take them.

My nearly 5-year-old MacBook Pro is totally fine for what I do with it - I'd like it to be faster at decoding/encoding video, and to do so with less heat generation and better power consumption, and more memory. Sure, I'd love it to be super fast, but if it ends up being quicker than I can give it stuff to do (and it kind of is already), then what's the point? But I can see shareholders having trouble accepting that we're seeing a true maturing of the technology on several levels... I'll shell out for a new one once Intel get their act together and get Broadwell out...

... there's a reason why Broadwell is late, and I think this is a sign of what's to come for the next few cycles - things ain't quite as easy, simple, or cheap(!) as they used to be!
 

I agree that Apple will not push the *CPU* this round. Their focus, IMHO, will be on the uncore (Apple-branded GPU, far better L3, NoC tying everything together).
The first A57s look like they're arriving in about 8 months (AMD's may arrive sooner, but they're targeted at servers, so they're not really competing with Apple). That means Cyclone will remain the king for at least 8 months, and will probably still only be matched, not beaten, by the various A57 variants. Which means, in turn, that Apple can relax on pushing the CPU of the A8 --- simple frequency scaling will do the job to retain the kingship.

More generally, the point is not ONLY performance. The faster you can finish the work you need to get done and put the CPU to sleep, the slower your battery drains.
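
A toy "race to sleep" energy calculation; the power figures and timings below are assumptions chosen only to show the shape of the effect:

```python
# Toy "race to sleep" energy model; the power numbers are assumptions.
# Finishing sooner at higher power can still win on total energy.

IDLE_W = 0.05     # deep-sleep power
WINDOW_S = 10.0   # interval until the next burst of work

def energy(active_watts, active_seconds):
    idle_seconds = WINDOW_S - active_seconds
    return active_watts * active_seconds + IDLE_W * idle_seconds

fast = energy(1.5, 2.0)   # faster core: 1.5 W busy, done in 2 s
slow = energy(1.0, 4.0)   # slower core: 1.0 W busy, needs 4 s
print(f"fast then sleep: {fast:.2f} J, slow throughout: {slow:.2f} J")
```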

Also, you have to remember the long game here. At the same time that Apple is designing a sub-5W SoC for iOS, there is nothing stopping them designing a 15W SoC for laptops, a 45W SoC for iMacs, and (why the heck not) a 120W SoC for servers (it costs a lot when you're buying $2000 Xeons in Apple datacenter quantities...). Obviously the ideal is to have one design that can, more or less easily, scale across this entire range. That may be too ambitious, and you may need to cover it with two CPUs. Either way, if you go to all the effort of adding some feature which improves performance at the high end, why not add it at the low end?

We've had the "Apple will switch to ARM" argument for a while now, and you're welcome to peruse the net to see the latest salvos (spoiler alert: NO-ONE HAS A CLUE WHAT WILL HAPPEN). But there is an air of unreality to the arguments on the pro-x86 side, which all assume that the ONLY feasible strategy Apple has going forward is to take an iOS chip and push it harder. This is just insane, like ignoring the fact that Intel ships BOTH Atoms and Xeons, and these are very different chips. If Apple wanted an x86 replacement, the obvious thing to do would be to establish a new design team and give them a target power of, say, 45 down to 15 W...

Apple can't be very happy about the fact that it's going to be 11 months before quad-core Broadwells (i.e. high-end Mac, rMBP, Mac mini) ship. They likely had plans involving Retina displays, 4K, and H.265 --- announcements about 4K movies in the iTunes Store, a new H.265/4K-ready Apple TV, etc. --- that they wanted to unveil more or less simultaneously across the entire product line.
Intel has thrown ALL of that for a loop, and Apple will have to recover as best they can with a staggered rollout (MBA first, but then all the models that take a quad-core variant a lot later). It won't be nearly as dramatic, and I expect Apple is NOT PLEASED.
This, as much as the cost of Intel CPUs, has (IMHO) put a lot more weight behind the faction inside Apple that argues for a Mac switch to ARM. I expect the various internal paper designs for "my dream ARM CPU" are, even as we speak, being aggressively modeled, laid out, and generally prepped for release in 2016 or so.
 
5 watts is the point I've been waiting for. I've been wanting to build a file server using FreeNAS. I tried an Atom CPU and it was not enough for the performance level I wanted.

(FreeNAS is BSD based, just like Mac OS X)

I hope you are considering ZFS and ECC RAM.
 
... there's a reason why Broadwell is late, and I think this is a sign of what's to come for the next few cycles - things ain't quite as easy, simple, or cheap(!) as they used to be!

That's probably accurate. Intel has also fundamentally shifted directions, away from raw performance and towards lower power consumption. The threat of ARM is undoubtedly what pushed them in that direction. As many others have mentioned, there's a much larger market out there for mobile devices with longer battery life than there is for a faster desktop computer. When the vast majority of users are simply surfing the web, a current-gen i7 is already serious overkill.

The vast improvement in Intel's integrated graphics is also worth mentioning. That could eventually have profound effects on the mobile market, and perhaps even the desktop market.
 
You may not care what most people do, but Apple sure does, because those people are their customers. And Intel cares about what Apple cares about, because Apple is one of Intel's customers.

Buy a PC if you want thick powerful machines.

That's what I am going to do next time, as Apple has stopped making serious stuff.

----------

This can already be done...
But why would you want a laptop that is an inch thick? That is thicker than the old MBPs.

So why is it not done, instead of wasting time making even thinner laptops with the same weak performance?

I say 1 inch thick as it is aggressive enough given that current Xeon laptops are over 2 inches thick.
 
What's wrong with ARM? It may never be faster than x86 in the same timeframe, but performance will become good enough eventually. Apple will then be able to design the chips themselves, and take their pick of TSMC, Samsung, or GlobalFoundries to manufacture them.

Especially with the rise of apps, the CPU is becoming increasingly commoditized.

As you just said, ARM is slower. And compatibility will be annoying, at least temporarily. I'd rather they don't make a switch (or partial switch) they don't have to make.

----------

I know what an inch is. The Xeon laptops are actually quite a bit thicker, at around 6cm.

There would be a massively bigger market for those at 1 inch, instead of the current tiny niche market.

I don't care if most people do basic stuff with their laptops.

I don't understand the need for CPU power. Why do you suddenly "need" more power as soon as processors get more powerful? The current Retina MacBook Pro is way faster than the beefiest laptop of 4 years ago. If you REALLY need it, you're better off running everything on a home desktop PC and queuing up jobs over SSH from your laptop.
 
Yikes.. that's a pretty unaltruistic viewpoint... The masses be damned! What I want is more important! :eek:

The masses already have stuff that works very well for them, while heavy users have been stuck with weak laptops for years now.
 
I don't understand the need for CPU power. Why do you suddenly "need" more power as soon as processors get more powerful? The current Retina MacBook Pro is way faster than the beefiest laptop of 4 years ago. If you REALLY need it, you're better off running everything on a home desktop PC and queuing up jobs over SSH from your laptop.

I don't suddenly need more power; I have been stuck with quad core for a year now.

Remote computing cannot always be done.
 
What's wrong with ARM? It may never be faster than x86 in the same timeframe, but performance will become good enough eventually.

That's a promising tagline for Apple ads of the future. "Introducing the new Apple MacBook Pro - Good enough."
 
These teases keep me playing the waiting game. I was definitely going to upgrade my 2010 MBA with Haswell, then I was definitely going to upgrade it with Haswell refresh, now I'm definitely going to upgrade to Broadwell. Each time I'm about to complete my purchase, I look at the total and think "well, my MBA still technically does everything I NEED it to do" and keep waiting. :confused:

Five years is old enough! Get the Broadwell with Retina, you'll love it. Probably late 2014 or early 2015. There have been so many technological leaps since 2010. The PCIe SSD alone is almost worth it. Retina is also gorgeous.

----------

That's a promising tagline for Apple ads of the future. "Introducing the new Apple MacBook Pro - Good enough."

The Chromebook may file a trademark suit in that case.
 
(FreeNAS is BSD based, just like Mac OS X)
Barely... What would become the OS X codebase was forked from the BSD one in the mid-'80s, and that's almost 30 years ago by now.

On my drive in to work each morning, I can see off to my right Intel's latest new fab (Chandler, AZ), built at a cost of over $5Bn and currently all but idle. It doesn't have the demand to fill it. Foundry companies are being a little more cautious and circumspect about building new fab capacity, focusing more on moving to the next process node than on expanding actual capacity. Some companies can't even give their unwanted but comparatively modern fabs away - the market simply isn't there at the moment.
The reason why Intel, IBM and GlobalFoundries aren't expanding is simply that the markets those companies mainly make products for aren't growing. The server, mainframe and especially PC markets have all stagnated, and the PC market has even been decreasing. The market that has actually been growing, and fast, is the mobile market, where despite a lot of effort Intel hasn't been able to gain ground and ARM-based solutions reign supreme.

Sure, the PlayStation 4 and Xbox One, which are both built around variations of the same APU by AMD and made by GlobalFoundries, have sold well, and AMD is once again flush with cash because of this, but it's hardly enough to stop the decline of x86.

Companies like TSMC and Samsung's foundry division, who make ARM chips for various manufacturers, have on the other hand been expanding, even to some extent on U.S. soil (Samsung's Austin, TX plant). Intel could basically snuff these companies out and get a lot of new business by offering the same kind of contract manufacturing to fabless chip companies like Apple. However, that would basically kill their x86 chips intended to compete with ARM-based solutions, and Intel generally prefers controlling a whole market rather than just being a foundry.

mschmalenbach said:
We may not yet have technically hit the Moore's Law wall, but we are very much now at the point where the price per transistor is no longer falling, and won't fall further. I see here talk of 10, 7 and 5nm nodes. The practical issue, in addition to the cost of the steppers and other fab equipment increasing exponentially from node to node, is the cost of a mask set. Each chip has its own set of masks, perhaps 40 or more in all, for the 4-, 5- and 6-layer metal processes and 3D transistors being used. At these bleeding-edge nodes a mask set costs millions of dollars, and soon tens of millions, and the development costs become so high that Moore's Law is usurped by simple economics.
Moore's Law doesn't speak about transistor density; it just talks about the total number of transistors. When they basically hit the wall, they'll probably be able to keep it going for some time just by increasing the size of the chips. You're probably going to say this will make chips too expensive, but I don't think so, because at that point the production lines will have been paid off and yield problems will have been ironed out. At that point it'll just be maintenance, and the cost of equipment per cm² of chip will have come down significantly.
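
As an illustrative sketch of that claim (every number below is invented): at a fixed density, transistor count grows linearly with die area, while a simple exp(-D*A) defect model shows the yield pressure on cost per good die:

```python
# Illustrative sketch: growing transistor count via die area at a fixed
# density, with a simple exp(-D*A) defect model. All numbers are invented.

import math

DENSITY = 20e6            # transistors per mm^2, held fixed ("the wall")
DEFECTS_PER_MM2 = 0.001   # mature, "ironed out" defect density
WAFER_COST = 5000.0       # dollars; lines paid off, mostly maintenance
WAFER_AREA = 70000.0      # usable mm^2 on a 300 mm wafer, roughly

for die_mm2 in (100, 250, 500):
    transistors = DENSITY * die_mm2 / 1e9
    yld = math.exp(-DEFECTS_PER_MM2 * die_mm2)
    cost = WAFER_COST / ((WAFER_AREA / die_mm2) * yld)
    print(f"{die_mm2:>3} mm^2: {transistors:.0f}B transistors, "
          f"yield {yld:.0%}, ~${cost:.0f} per good die")
```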

mschmalenbach said:
Squeezing 5 billion transistors onto a die is incredible - but who really needs it? Gamers? Perhaps. A $21Bn industry, according to some - but at $2K or more per system, that's 10 million purchasers per year - call it 15 million to be charitable. Compare that with the hundreds of millions each year buying a new tablet, netbook, laptop etc., and you see very different markets and very different dynamics.
While not every device needs to have a massive amount of computing horsepower, one thing that pretty much all device categories have in common is that they're in a perpetual race for better performance. I don't see this race for who's got the fastest device stopping any time soon.

mschmalenbach said:
I think we're seeing an inflection point in the industry, and those chasing processing power above all else will not grow in number, and may even dwindle. Processing power is no longer the industry driver it once was. A sub-$300 laptop is clearly now a point where more and more manufacturers are rethinking whether they even want to stay in that market, let alone bring out the next new machine.
The reason the market has begun to gravitate towards mobile devices like laptops, smartphones and tablets is not that people are tired of performance increasing over time, but that people have gotten a taste for the usability and portability of these devices. The reason Intel has still failed to get its foot in the door in the smartphone and tablet space is that it ignored that market and focused on chips that were either too big (everything between servers and laptops) or too small (microcontrollers).

mschmalenbach said:
Processor architectures haven't changed significantly in a long time - squeezing the most out of architectures is as much about compiler technology as architecture - perhaps more so, especially when making use of multiple cores, GPU-based compute engines, or large register files. ARM is a RISC design, Intel's Core iX devices are CISC designs - fundamentally different in so many ways, yet somehow similar in where their optimisations take them.
A point I'm going to have to make is that CISC isn't really a type of design; it's something the people behind the RISC idea used to classify literally everything else. Also, while x86 may seem CISC-esque on the surface, underneath at the microcode level it's actually very much RISC-like.

Sure, we haven't seen any major shifts to a new type of architecture, but existing ones have seen some pretty major upgrades. In x86, at the microcode level, things have changed very significantly and a lot of new instructions have been added. Math computation especially has improved pretty significantly through the addition of advanced vector instructions, and general performance has been significantly improved by better branch prediction, out-of-order execution and cache replacement policies. These improvements are so big that people are now calling modern chip architectures "post-RISC".

ARM has seen some even bigger improvements, first through the jump to the v7 instruction set (the original and 3G iPhones were still v6), and now everyone is scrambling to get their v8 chips to market.

mschmalenbach said:
... there's a reason why Broadwell is late, and I think this is a sign of what's to come for the next few cycles - things ain't quite as easy, simple, or cheap(!) as they used to be!
I suppose that's just the cost of progress when you get far enough... It becomes harder and harder to improve upon what you already have.
 
Yep, it is silly. With the 2011 MBP line they failed to make a laptop that can keep cool WITH a fan.
Plenty of people had a 500 Euro logic board replacement done by Apple (several times!) and even that didn't fix the problem.

I'm just saying I wouldn't trust Apple to keep laptops cool enough to last longer than 2-3 years.

Funny you should mention Radeongate. My late 2011 15" died just a week ago from the same cause. I have a reflow station, so I managed to get it up and running again, but the question is: how long will it last? Perhaps best to sell now with a disclaimer and wait for Broadwell. Not happy about the situation though; the performance of the laptop (especially with upgraded RAM and SSD) is more than sufficient. Surely one shouldn't need to upgrade due to reliability concerns when paying this much for an Apple "pro" product.

----------

These teases keep me playing the waiting game. I was definitely going to upgrade my 2010 MBA with Haswell, then I was definitely going to upgrade it with Haswell refresh, now I'm definitely going to upgrade to Broadwell. Each time I'm about to complete my purchase, I look at the total and think "well, my MBA still technically does everything I NEED it to do" and keep waiting. :confused:

You might be waiting for Skylake then; it will be rolling out next year, not long after Broadwell.
 