They won't. Even people who bought the first-gen Intel Macs were still able to run all their existing PPC apps at near-native performance. Considering that most of the work of supporting PPC apps on Intel Macs is done by providing universal libraries, there is very little for Rosetta to do other than translate PPC instructions to Intel, and thus very little for Apple to test. That's why there really isn't much of a performance hit, and why they pretty much got Rosetta right on the first try, unlike WINE... *shudders*

To be fair, WINE has to do a lot more than Rosetta. Rosetta just has to translate code within the same OS. WINE has to translate code between two different OSes with different libraries and driver models.

If they made the switch to ARM, all of our Intel apps would still run via an Intel Rosetta, and most likely at near-native speeds, as they did with PPC Rosetta. I don't get why anyone is afraid of this. In the Windows world, there is plenty to be afraid of. We all saw how Microsoft handled the 32-bit to 64-bit transition. They fragmented their OS to the point where developing 64-bit apps just makes no sense when some of your customers are still using 32-bit versions of Windows, and there is nothing like "universal binaries". It just adds confusion if you have two versions for people to download. I had to explain to someone why the Intel version of OpenOffice wouldn't run on their PPC Mac... That's an example of lazy developing, but it's the only way things are done in the Windows world.

I think it's probably one of the few times I can think of where restricting what developers can do has been a good thing. Microsoft has umpteen million legacy lines of code to support because they didn't give a cutoff for Win32 -> .NET. Apple just has Cocoa.

IMO, it speaks volumes about a development tool when a company doesn't really use it themselves.
 
WINE and Rosetta are about as similar as a dog sniffing on the ground and a pot of clay.

I don't even get where you got this comparison from.

I was speaking to why their performance was so different, and why Apple got Rosetta right performance- and stability-wise on the first attempt (for the most part) while WINE still has neither. Did you even read my post?
 
To be fair, WINE has to do very different things that have nothing to do with what Rosetta does.

Fixed that there for you. WINE is not an emulator or a virtualization tool. WINE implements support for Windows' PE format (Portable Executable - what a laughable name) and reimplements much of the Win32 API in a Unix/POSIX-compatible way. WINE is basically to Win32 what Mono is to .NET, except WINE can also load DLLs in case it doesn't have a native implementation of a particular library.

What does this even have in common with Rosetta? Get some facts before posting stuff, people.

I was speaking to why their performance was so different, and why Apple got Rosetta right performance- and stability-wise on the first attempt (for the most part) while WINE still has neither. Did you even read my post?

Again, where did you even get this comparison? You're again comparing a dog sniffing the ground to a pot of clay; what metric of performance are you even using to compare the two?

Also, WINE has no performance penalty. For stuff that "just works", it works at native speed, because it is all running natively (Winelib is just a bunch of function calls that reimplement the Win32 API). Go read up on what WINE is and change your analogy to something that at least bears a resemblance to Rosetta. Right now, your pot of clay can't sniff out bones and bark at the moon.
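
For illustration, here is the general idea in toy form - not actual WINE code, and MyMessageBoxA is a made-up stand-in - showing why a reimplemented API call runs at native speed: the "Windows" function ends up being ordinary local code.

```c
/* Toy illustration of the "reimplement Win32 on top of Unix" idea described
 * above -- NOT actual WINE code, just a sketch of the concept. */
#include <stdio.h>

/* A made-up stand-in for a Win32-style call. Real WINE reimplements the
 * actual Win32 API; here we just show that the "Windows" function body
 * is ordinary native Unix code. */
static int MyMessageBoxA(void *hwnd, const char *text, const char *caption,
                         unsigned int type)
{
    (void)hwnd; (void)type;              /* ignored in this sketch */
    printf("[%s] %s\n", caption, text);  /* plain stdio underneath */
    return 1;                            /* pretend the user clicked OK */
}

int main(void)
{
    /* The application "thinks" it is calling a Windows API, but the call
     * runs at native speed because it is just a local C function. */
    MyMessageBoxA(NULL, "Hello from a Win32-style call", "Sketch", 0);
    return 0;
}
```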
 
Fixed that there for you. WINE is not an emulator or a virtualization tool. WINE implements support for Windows' PE format (Portable Executable - what a laughable name) and reimplements much of the Win32 API in a Unix/POSIX-compatible way. WINE is basically to Win32 what Mono is to .NET, except WINE can also load DLLs in case it doesn't have a native implementation of a particular library.

What does this even have in common with Rosetta? Get some facts before posting stuff, people.

Well, that's still a lot more work than what Rosetta does. ;)
 
Giuly,

If Apple is going to abandon Intel, why would Intel license their tri-gate tech to them? It would only make it possible for them to lose sales.

Morphingdragon:
Chargers and power supplies do get less efficient at higher loads, and heat does tend to kill devices.

The analogy is similar to a car: running the engine close to its max is going to decrease the life of the engine.

Now, it's true that underusing a power supply will drop its efficiency, but a laptop charger supplying only 20 watts, even at lower efficiency, will still max out at around 35 watts of draw.

Most power supplies hit peak efficiency at 50-75% load. Maybe this is the reason that the MBP power bricks have such a low rating.

But remember the efficiency is probably only about 85%. Therefore the charger will draw more power from the wall so that it can supply 85 watts to the computer (and look at how small those chargers are - less area for heat dissipation). The CPU can use 45 watts (MBP) and the graphics 30-35 watts (6750M), not counting the screen, hard drive, speakers, WiFi, etc. Of course most people are never going to tax the CPU and GPU to the max at the same time, but I could see someone gaming getting close to this.

Most of the cooling effect is because more powerful power supplies are larger and therefore provide more surface area for heat dissipation, which cools down the power supply.

I've got a Windows laptop (quad-core Sandy Bridge with mid-range dedicated graphics). If I use a 65-watt adapter it gets slightly warm. If I use a 130-watt adapter I can't even tell by feeling the adapter whether the computer is on.
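
For the efficiency point, a quick back-of-the-envelope sketch (the 85 W output and 85% efficiency figures are the rough numbers from above, not measurements):

```c
/* Rough wall-draw arithmetic for the charger discussion above.
 * The 85 W output and 85% efficiency figures are the poster's assumptions. */
#include <stdio.h>

int main(void)
{
    double output_w   = 85.0;   /* power delivered to the laptop (assumed) */
    double efficiency = 0.85;   /* assumed conversion efficiency */

    double wall_draw_w  = output_w / efficiency;   /* power pulled from the wall */
    double dissipated_w = wall_draw_w - output_w;  /* lost as heat in the brick */

    printf("Wall draw: %.1f W, heat in the brick: %.1f W\n",
           wall_draw_w, dissipated_w);
    return 0;
}
```

With those assumptions the brick pulls about 100 W from the wall and has to shed roughly 15 W as heat, which is why the surface area of the brick matters.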
 
If Apple is going to abandon Intel, why would Intel license their tri-gate tech to them? It would only make it possible for them to lose sales.

The headline is wrong. The claim is not about abandoning Intel, but x86.
Intel is not interested in licensing technology, but in acting as a foundry.
 
The headline is wrong. The claim is not about abandoning Intel, but x86.
Intel is not interested in licensing technology, but in acting as a foundry.

I was talking about his passage here. I think he means "be" the same, not "do" the same. They are not going to give their competitors the tech to beat them.
A lot of things have already been said. I’d like to talk about the most common things a little more.

Intel x86 will always be more powerful than ARM.
Sure, at least in the near future. RISC-based architectures like PowerPC, Alpha, MIPS and SPARC were more powerful than CISC-based x86, and probably still would be today. However, you don’t see much of them around anymore - because x86 is “nearly as good, but significantly cheaper”. The same goes for ARM: a 2.5GHz quad-core Cortex-A9 CPU might not be a wise choice for a MacBook Pro, but it most definitely blows the MacBook Air’s 1.6GHz Core 2 Duo out of the water while consuming 20% less power. This also means that an ARM core could be enhanced to use those 20%, for example by adding two additional cores, which should bring it to a performance level comparable to a Sandy Bridge CPU in that class. The quad-core Cortex-A15 is already reaching nearly the same raw DMIPS performance as a quad-core Phenom II running at 3GHz - while using about 7% of the power.
The same “nearly as good, but significantly cheaper” argument goes for AMD’s K10 vs. Sandy Bridge as well, if you forget about the fact that synthetic benchmarks are most likely to favor Intel’s architecture anyway. An i7-970 costs 2.5 times as much as the Phenom X6 1100T, though. Will most people even notice the difference in day-to-day use? Probably not.

Ivy Bridge with its Tri-Gate technology wipes the floor with ARM.
Sure it does: it brings advantages in either performance or power consumption of “up to” 37% compared to second-generation i7 chips. However, you’re comparing apples and oranges here. The smaller the manufacturing process, the lower the power consumption, and that translates to higher processing power at the same power usage by adding more cores or increasing the frequency of the CPU. The current generation of Apple’s ARM implementation, the A5, is manufactured at 45nm with classic transistors. Given that this matches the Penryn variant of the Core 2 architecture, you have to compare those to the Cortex-A9 - and as stated, ARM exceeds the performance of the lower-end Core 2 Duos while consuming 20% less power. A shrink to 32nm will probably be superior to the low-end first- and second-generation i7 chips, and shrinking again to 22nm and implementing the Tri-Gate technology will do the same thing with Ivy Bridge. ARM states that the Cortex-A15 - the next generation of ARM CPUs, which we will most probably be introduced to as the “Apple A7”, and which is still based on the 45nm manufacturing process - will offer a 30% overall performance increase over the Cortex-A9, other factors being equal.
If Apple decides to go the ARM route for low-end MacBooks, they intend to ship in high volume. They are predestined to become the third large player in the CPU business behind Intel and AMD. Given that Apple has incredible financial resources, there is nothing wrong with the idea of Apple teaming up with Samsung or any other chip manufacturer and developing the manufacturing process needed to build Tri-Gate silicon on a 22nm process. For the engineering of the CPU itself, Apple already has some knowledge in house (PA Semi + Intrinsity); it’s just a matter of extending it. Lastly, the Apple A5 costs $25 and NVIDIA’s Tegra 2 comes in at $15, while the Core 2 Duo SU9600 found in the 11” MacBook Air has a hefty $289 written on its price tag.
That leaves enormous room for Apple to invest in this area, and even if Apple does “not know how to make a $500 sub-notebook that is not a piece of junk” - which is understandable given that the Intel CPU alone costs nearly $300 - their DNA might allow them to ship a $799 ARM-based MacBook Air while enjoying the same 40% profit margin, solely because they don’t have to pay Intel anymore. Not to mention that this would most probably result in increased sales. I consider it plausible for the simple fact that Apple calls the MacBook Air the “notebook of the future”: they need a cheap and powerful CPU with low power consumption, and that’s exactly ARM’s strength.

Anyway: if you’re given the choice between a 13” quad-core ARM MacBook at $799 and a 13” dual-core i3 MacBook Pro at $1199, the question isn’t ARM vs. x86 anymore, the question is whether you can afford it. The further down the scale you go, the more money becomes an argument. The Mac Pro is anywhere between $2,499 and $12,000. Add $500 and everybody rants, but nobody cares. But removing $200, or 20%, from a MacBook while increasing the performance compared to previous generations is a chance for Apple to increase market share in the areas where money matters. This alone justifies a switch to ARM.

I also forgot to say.

Nice biased comparison: a quad-core ARM MacBook and a dual-core i3 MacBook Pro. Are we talking about the future here? Apple does not even sell dual-core i3 MacBook Pros now, let alone in the future. Plus it seems that Intel is winding down their i3 line (there is only one Sandy Bridge i3). And by the time a quad-core ARM processor is possible, it will likely be possible to put a quad-core i7 in a 13-inch MacBook.

Intel started developing tri-gate in 2002 (production issues delayed implementation). They have a lot of patents that they are not going to share with anyone.
 
So I just bought a new quad-core Sandy Bridge iMac tonight and now this news breaks. Is ARM actually building anything in any way, shape or form that competes with the Intel x86 stuff right now, or is this just vaporware at this point?


Enjoy your excellent purchase and don't worry about it.
When it does change in a couple of years, you will already be looking for a new machine anyway :)

Switching to ARM means no more Boot Camp :mad:


If Windows also gets to run on ARM in the meantime, Boot Camp will still work; it will have to be modified, of course.

So this is what's going down... Apple just wants to spend less on the processors and still charge a premium?

I want a premium processor for the premium price im paying you jewws...

No need to insult the Jews lol

I read the thread about Apple moving from PowerPC to Intel....

That one seemed plausible back when I heard the news, since being able to run Windows as well would be a great advantage for Macs, but I think this is a bit far-fetched.

Sure, ARM is supported in Windows 8, but I think Apple's relationship with Intel is really too good at the moment. They get early releases of Intel's architectures, and custom-designed silicon for their MacBook Airs from Intel... This shows the great relationship that Apple has with Intel... they already have a great partnership deal!!

Hard to imagine they would just "backstab" Intel and switch to ARM....


Yes, the Thunderbolt thing and all that sharing - they seem to be in a good place right now... But Apple and Adobe were once pretty close too, right? Look at the picture now...

I just want to say that the nerd rage on this thread is a gift to any fan of comedy. Fingers crossed that it does not lead to any stabbings.

We gotta have fun somehow :D

A lot of things have already been said. I’d like to talk about the most common things a little more.

Intel x86 will always be more powerful than ARM.
Sure, at least in the near future. RISC-based architectures like PowerPC, Alpha, MIPS and SPARC were more powerful than CISC-based x86, and probably still would be today. However, you don’t see much of them around anymore - because x86 is “nearly as good, but significantly cheaper”. The same goes for ARM: a 2.5GHz quad-core Cortex-A9 CPU might not be a wise choice for a MacBook Pro, but it most definitely blows the MacBook Air’s 1.6GHz Core 2 Duo out of the water while consuming 20% less power. This also means that an ARM core could be enhanced to use those 20%, for example by adding two additional cores, which should bring it to a performance level comparable to a Sandy Bridge CPU in that class. The quad-core Cortex-A15 is already reaching nearly the same raw DMIPS performance as a quad-core Phenom II running at 3GHz - while using about 7% of the power.
The same “nearly as good, but significantly cheaper” argument goes for AMD’s K10 vs. Sandy Bridge as well, if you forget about the fact that synthetic benchmarks are most likely to favor Intel’s architecture anyway. An i7-970 costs 2.5 times as much as the Phenom X6 1100T, though. Will most people even notice the difference in day-to-day use? Probably not.
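
Taking the figures claimed above at face value (roughly equal DMIPS at about 7% of the power), the implied performance-per-watt gap works out like this - a rough sanity check on the claim, not a benchmark:

```c
/* Back-of-the-envelope check of the claim above: roughly equal DMIPS at
 * about 7% of the power. Both inputs are the poster's claims, not benchmarks. */
#include <stdio.h>

int main(void)
{
    double dmips_ratio = 1.0;    /* claimed: "nearly the same" raw DMIPS */
    double power_ratio = 0.07;   /* claimed: "about 7% of the power"     */

    /* If both claims held, performance per watt would differ by this factor. */
    double perf_per_watt_gain = dmips_ratio / power_ratio;
    printf("Implied DMIPS/W advantage: ~%.0fx\n", perf_per_watt_gain);
    return 0;
}
```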

Ivy Bridge with its Tri-Gate technology wipes the floor with ARM.
Sure it does: it brings advantages in either performance or power consumption of “up to” 37% compared to second-generation i7 chips. However, you’re comparing apples and oranges here. The smaller the manufacturing process, the lower the power consumption, and that translates to higher processing power at the same power usage by adding more cores or increasing the frequency of the CPU. The current generation of Apple’s ARM implementation, the A5, is manufactured at 45nm with classic transistors. Given that this matches the Penryn variant of the Core 2 architecture, you have to compare those to the Cortex-A9 - and as stated, ARM exceeds the performance of the lower-end Core 2 Duos while consuming 20% less power. A shrink to 32nm will probably be superior to the low-end first- and second-generation i7 chips, and shrinking again to 22nm and implementing the Tri-Gate technology will do the same thing with Ivy Bridge. ARM states that the Cortex-A15 - the next generation of ARM CPUs, which we will most probably be introduced to as the “Apple A7”, and which is still based on the 45nm manufacturing process - will offer a 30% overall performance increase over the Cortex-A9, other factors being equal.
If Apple decides to go the ARM route for low-end MacBooks, they intend to ship in high volume. They are predestined to become the third large player in the CPU business behind Intel and AMD. Given that Apple has incredible financial resources, there is nothing wrong with the idea of Apple teaming up with Samsung or any other chip manufacturer and developing the manufacturing process needed to build Tri-Gate silicon on a 22nm process. For the engineering of the CPU itself, Apple already has some knowledge in house (PA Semi + Intrinsity); it’s just a matter of extending it. Lastly, the Apple A5 costs $25 and NVIDIA’s Tegra 2 comes in at $15, while the Core 2 Duo SU9600 found in the 11” MacBook Air has a hefty $289 written on its price tag.
That leaves enormous room for Apple to invest in this area, and even if Apple does “not know how to make a $500 sub-notebook that is not a piece of junk” - which is understandable given that the Intel CPU alone costs nearly $300 - their DNA might allow them to ship a $799 ARM-based MacBook Air while enjoying the same 40% profit margin, solely because they don’t have to pay Intel anymore. Not to mention that this would most probably result in increased sales. I consider it plausible for the simple fact that Apple calls the MacBook Air the “notebook of the future”: they need a cheap and powerful CPU with low power consumption, and that’s exactly ARM’s strength.

If we just look at all the hardware-associated topics: it’s plausible - if not outright reasonable.

But Mac OS X doesn’t run on ARM
Wrong. Mac OS X and iOS are the same thing, iOS just being a derivative of Mac OS X. They are both distributions of Darwin, which is the underlying operating system - just like Ubuntu is a distribution of GNU/Linux and Kubuntu is one of its derivatives. This comparison really makes the point: only the things that you actually see have been changed. Mac OS X incorporates the traditional ideas of windows and point-and-click, while iOS is based on the touch idea. However, this doesn’t make them different OSes; the OS is still Darwin. Both Mac OS X 10.6 and iOS 4 are derivatives of Darwin 9.0, and both are the first versions that support Grand Central Dispatch, for example. Even higher-level API changes in AppKit and UIKit correspond with each other (compare the iOS API to the Mac API), so the abstraction from Mac OS X happens only where it is needed.
So, given that there is a Mac OS X derivative that runs on ARM, porting Mac OS X is a marginal effort at most. You can expect that there is already an iPad 2 somewhere deep in Apple’s labs running Mac OS X with the help of the VGA adapter, a Magic Mouse and a Bluetooth keyboard. There is not much more to say here: it’s not merely possible, it’s something that works by default, because Mac OS X is built to be platform-independent already; otherwise it wouldn’t run on PPC and Intel. ARM support consists literally of changing the target platform in Xcode to ARM and recompiling. Hard to believe, but true.
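
To make the “change the target and recompile” point concrete, here is a minimal sketch: the same C source builds unchanged for x86, PPC or ARM, and the compiler’s predefined architecture macros (the usual GCC/Clang ones) show which target you compiled for.

```c
/* Tiny illustration of "just recompile for the target architecture":
 * the same source builds for x86, x86_64, PPC or ARM, and the compiler's
 * predefined macros tell you which slice you ended up in. */
#include <stdio.h>

int main(void)
{
#if defined(__x86_64__)
    puts("Compiled for x86_64");
#elif defined(__i386__)
    puts("Compiled for x86 (32-bit)");
#elif defined(__ppc__) || defined(__ppc64__)
    puts("Compiled for PowerPC");
#elif defined(__arm__)
    puts("Compiled for ARM");
#else
    puts("Compiled for some other architecture");
#endif
    return 0;
}
```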

BootCamp doesn’t work anymore
Not with your x86_64 version of Windows 7 - but with the ARM version of Windows 8. If you need it for business applications, you’ll be as fine as you can be with Microsoft supporting two platforms simultaneously - you can’t blame that on Apple, though.
ARM CPUs are intended (at first) for notebook usage - the MacBook Air and MacBook. If you run games, you don’t want to do it on such a machine in the first place. It will take longer to see an ARM-based Mac Pro, and even if you think of ARM as the architecture that powers phones - if Apple takes ARM seriously, it will not only be more powerful than the mobile chips talked about in the first two paragraphs, but also exceed the performance of a Xeon. Although this is a hard road, Apple will likely stick with x86 in the iMac and Mac Pro lines longer than in the MacBooks and the Mac mini. There is nothing wrong with supporting two architectures simultaneously and slowly phasing the high-end lines over to ARM when possible, or not at all. We had that with the introduction of the Intel Macs, which coexisted peacefully with the PPC Macs - where a PowerMac G5 was faster than an Intel MacBook, too. Apple could have kept it that way, and it would’ve been fine - but the PPCs that IBM was able to deliver weren’t fast enough for any Mac anymore, from the MacBook to the Mac Pro.

Third-party applications don’t work anymore
For this matter let’s assume the worst-case scenario - no x86 emulation whatsoever. However complicated one application might be, Apple already solved all of those problems with the Intel transition. Apple usually introduces such a big step in advance, so developers have time to recompile their applications before the consumer hardware launches. Usually, applications only have to be recompiled to run on ARM, and this happens automatically when a new version of the application is released. Unlike Microsoft’s approach to ARM support in Windows 8, Mac OS X comes with universal binaries and runs one version of an application on PPC, x86, x86_64 and ARM. You don’t even notice the underlying architecture anymore; Mac OS X has become the architecture in itself.
If you use applications which are not under current development, you might consider finding a replacement anyway.
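
The “one version of an application” part works because a universal binary is just several architecture slices packed behind one fat header. Here is a minimal sketch that lists those slices, assuming a Mac with the Mach-O headers (<mach-o/fat.h>) available; thin, single-architecture files are simply reported as such.

```c
/* Minimal sketch: print the architecture slices inside a universal ("fat")
 * Mach-O binary. Mac-only (needs <mach-o/fat.h>); error handling kept short. */
#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>      /* ntohl: fat headers are stored big-endian */
#include <mach-o/fat.h>
#include <mach/machine.h>

int main(int argc, char *argv[])
{
    if (argc != 2) { fprintf(stderr, "usage: %s <binary>\n", argv[0]); return 1; }

    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    struct fat_header fh;
    if (fread(&fh, sizeof fh, 1, f) != 1 || ntohl(fh.magic) != FAT_MAGIC) {
        puts("Not a fat binary (probably a single-architecture 'thin' file).");
        fclose(f);
        return 0;
    }

    uint32_t n = ntohl(fh.nfat_arch);
    printf("Universal binary with %u slice(s):\n", n);

    for (uint32_t i = 0; i < n; i++) {
        struct fat_arch fa;
        if (fread(&fa, sizeof fa, 1, f) != 1) break;
        printf("  cputype 0x%x, %u bytes\n",
               ntohl((uint32_t)fa.cputype), ntohl(fa.size));
    }
    fclose(f);
    return 0;
}
```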

Conclusion
After all that has been said, it is clear that transitioning part of the Mac family to ARM is not only possible, but also reasonable and the best thing that Apple can do. Given that ARM is “nearly as good” - and potentially even better - “but significantly cheaper” than Intel’s chips in the near future, it wouldn’t make sense to go another route in terms of profitability.
The fact that people associate ARM with slower mobile phone and tablet devices is a minor problem, as they will be proven wrong by raw facts anyway. If you’re given the choice between a 13” quad-core ARM MacBook at $799 and a 13” dual-core i3 MacBook Pro at $1199, the question isn’t ARM vs. x86 anymore, the question is whether you can afford it. The further down the scale you go, the more money becomes an argument. The Mac Pro is anywhere between $2,499 and $12,000. Add $500 and everybody rants, but nobody cares. But removing $200, or 20%, from a MacBook while increasing the performance compared to previous generations is a chance for Apple to increase market share in the areas where money matters. This alone justifies a switch to ARM.

No idea why you have been down ranked.
For me it is one of the best posts in the thread, and it has some links to support the claims.
Thanks for sharing all the info and insight.
 
And This Is Why I Love Shoes

Unlike the "tricknology" industry which couldn't exist without the "constant churn" of always forcing some reason to make you buy new OS, Peripherals, Apps, and computing HW.......shoes are standalone devices that will last for years "as is" without investing another penny.

The tech industry (from top to bottom) loves "Churn".

So does the entertainment industry.....VHS->DVD->BluRay (and having to rebuy all your favorite movies/player HW every time a new format comes out)

Even if you did "draw a line in the sand" with all your current stuff, everything around you will change and eventually the latest browser won't run on your "old" OS, which won't run on your "old" HW, which won't play with your "old" peripherals... ad nauseam.

I love my clothes and kitchen appliances even more now :D
 
I still don't see any benefit for laptops or desktops to be anything but x86_64-compatible at this time.

It seems that one of the reasons Macs have increased market share in recent years is that there is a fallback for new users. If they just can't stand OS X, they can always run Windows on their Mac. That helped convince me to migrate my main desktop to a Mac 3 years ago. Generally, now, I just run Quicken on Windows (and occasionally Office 2010). A new architecture increases the risk.

Where the iPhone/iPad worked is that it wasn't trying to be a desktop/laptop replacement. When people try that, they run into too many limitations.
 
I still don't see any benefit for laptops or desktops to be anything but x86_64-compatible at this time.

It seems that one of the reasons Macs have increased market share in recent years is that there is a fallback for new users. If they just can't stand OS X, they can always run Windows on their Mac. That helped convince me to migrate my main desktop to a Mac 3 years ago. Generally, now, I just run Quicken on Windows (and occasionally Office 2010). A new architecture increases the risk.

Actually, in this case the new architecture would reduce risk and increase the true potential of OS X. When we moved to the PPC architecture there was little to no risk, but with the Intel transition the risk was greater than eating a chicken pot pie that hasn't been properly cooled and is piping hot.
 
Transistor count for comparison:

ARM Cortex A9 dual core - about 26 million transistors
Intel Atom single core - 47 million transistors
Intel Sandy Bridge desktop quad core - 915 million transistors

It has been estimated already that one could build a 40 core Cortex A15 on a 28nm fab process that would consume less than 30 W of power.

I don't think the x86 architecture will be around for another 40 years. MS porting Windows to ARM could be a sign that even the rats are starting to leave the sinking ship.
 
Intel started developing tri-gate in 2002 (production issues delayed implementation). They have a lot of patents that they are not going to share with anyone.

Many other organizations have been researching FinFETs for a long time.
 
I don't think the x86 architecture will be around for another 40 years.

Probably not, but it's hard to predict. Intel stopped making new processors that executed x86 instructions in the mid-90s, but somehow x86 (or more commonly now x64) binaries are still around.

In 2051 we may have bio-neural quantum CPUs that still run x64 bytecode as their binary interface.


MS porting Windows to ARM could be a sign that even the rats are starting to leave the sinking ship.

Or it's just MS taking advantage of their long-demonstrated ability to simultaneously support multiple architectures to redefine what a tablet can do.
 
If it's anywhere near the speed of Rosetta, I don't think a lot of people will care.

Rosetta worked fine because the x86 processors were a lot faster than the PowerPC processors at the time. Rosetta translated PowerPC code to x86 code. The translated code is not as efficient as good x86 code would be (there are always inefficiencies in such a translation), but since the x86 processor was a lot faster, the code actually ran at about the same speed as on a PowerPC. And as soon as OS code was called, that code ran at full x86 speed, giving an actual speed advantage.

With a transition from x86 to ARM, you have the same speed loss, but the less efficient code now runs on a slower processor. So you get code that is maybe half the speed of good ARM code, running on a slower processor. When the code calls into the OS, the disadvantage is not quite as bad, but it is still there.

Seriously, you wouldn't want to run an x86 emulator on ARM. Running code that has been recompiled for ARM, that is something else.
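
The arithmetic behind that point, with made-up round numbers (the ratios are illustrative assumptions, not benchmarks):

```c
/* Illustrative arithmetic for the point above: translated code pays an
 * overhead, and the direction of the host/guest speed gap decides whether
 * that is tolerable. The ratios here are made-up round numbers. */
#include <stdio.h>

static void show(const char *label, double host_vs_guest_speed,
                 double translation_slowdown)
{
    /* Effective speed of emulated code relative to the original machine. */
    double effective = host_vs_guest_speed / translation_slowdown;
    printf("%s: host %.1fx the guest, translation costs %.1fx -> "
           "emulated code runs at %.2fx original speed\n",
           label, host_vs_guest_speed, translation_slowdown, effective);
}

int main(void)
{
    /* PPC -> x86 (Rosetta): the new host was much faster, so a 2x
     * translation cost still left roughly original-machine speed. */
    show("PPC on x86", 2.0, 2.0);

    /* x86 -> ARM: a slower host plus the same translation cost would
     * leave emulated apps far below what users are used to. */
    show("x86 on ARM", 0.7, 2.0);
    return 0;
}
```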
 
People who still use assembly in their software are just sad. There is absolutely NO reason to use CPU-specific stuff, not anymore, as we have OpenCL and similar tech for performance-critical parallel computations.

The only field where hand-coded assembly makes sense is interpreters.

You're openly assuming that what I'm saying is CPU-specific. But I'm not. Furthermore, what good is OpenCL if you're primarily using a GPU that will always be more powerful than the CPU? Pretty inefficient if you ask me.

And this is precisely the reason why an inefficient and outdated architecture like x86 is still alive. If Apple has the courage to make the first step towards better tech, I will applaud them.

Outdated? Funny, the ARM architecture was developed around the same time as the 80286 (the x86 architecture predates it by a few years), in 1983. You make it seem like 60-year-old politicians are "fresh blood". (I hear they do that a lot in Vietnam.)

Furthermore, Apple doesn't hold a commanding lead in laptop or PC market share right now (I don't think it even leads). Unless HP or Dell joins Apple in switching entirely to ARM, developers will have no interest in switching to ARM development. Maybe in your little developer basement this is a blessing, but I can safely assume that the vast majority of devs (and their bosses) would not switch to ARM until they know there is a profitable market for it.

You are joking, right? An x86 CPU is a completely different pair of shoes from an ARM CPU. The latter can be designed relatively easily; the former are absolute monsters in terms of complexity. Intel has decades of design experience, all of which lives in their current CPU line. Destroy all the information about the Sandy Bridge design on Intel's servers, and it would take them at least 5 years to reconstruct it.

Perhaps. But who is to say Apple has not already been working on their own variation of the x86 architecture since the transition? You forget that, more often than not, we only hear rumors of new products after a company has spent a couple of years on them.

Of course, another issue rightly noted by earlier posters is that the A15 architecture will at best match a Core 2 (dating to 2006) in terms of performance when it comes out in 2013. By then, the successor of the Core i line (based on Haswell) will be out, and will trounce Core 2 ten- to twenty-fold in comparable chips. Even if it costs half as much, the A15 won't hold a candle to whatever Intel offers.

And the other issue is the strict limitation to laptops. Unless Apple intends to kill off the profitable iMac that saved its sorry ass 10 years ago, and to completely alienate the professional market by killing the Mac Pro, there is no benefit to switching to ARM on the laptops alone. It would require separate and incompatible versions of OS X/iOS/whatever it is by then, which in terms of company development is completely inefficient, not to mention outright confusing to the customer.
 
You're openly assuming that what I'm saying is CPU-specific. But I'm not. Furthermore, what good is OpenCL if you're primarily using a GPU that will always be more powerful than the CPU? Pretty inefficient if you ask me.



Outdated? Funny, the ARM architecture was developed around the same time as the 80286 (the x86 architecture predates it by a few years), in 1983. You make it seem like 60-year-old politicians are "fresh blood". (I hear they do that a lot in Vietnam.)

Furthermore, Apple doesn't hold a commanding lead in laptop or PC market share right now (I don't think it even leads). Unless HP or Dell joins Apple in switching entirely to ARM, developers will have no interest in switching to ARM development. Maybe in your little developer basement this is a blessing, but I can safely assume that the vast majority of devs (and their bosses) would not switch to ARM until they know there is a profitable market for it.



Perhaps. But who is to say Apple has not already been working on their own variation of the x86 architecture since the transition? You forget that, more often than not, we only hear rumors of new products after a company has spent a couple of years on them.

Of course, another issue rightly noted by earlier posters is that the A15 architecture will at best match a Core 2 (dating to 2006) in terms of performance when it comes out in 2013. By then, the successor of the Core i line (based on Haswell) will be out, and will trounce Core 2 ten- to twenty-fold in comparable chips. Even if it costs half as much, the A15 won't hold a candle to whatever Intel offers.

And the other issue is the strict limitation to laptops. Unless Apple intends to kill off the profitable iMac that saved its sorry ass 10 years ago, and to completely alienate the professional market by killing the Mac Pro, there is no benefit to switching to ARM on the laptops alone. It would require separate and incompatible versions of OS X/iOS/whatever it is by then, which in terms of company development is completely inefficient, not to mention outright confusing to the customer.

"Fresh blood", indeed.

The major advantage that I see in the ARM architecture is that it is intended to be an expandable, modular architecture for ultra-low-power requirements, which allows the licensee the privilege of putting things together in combinations that suit their particular needs - sort of a Lego or Tinker Toy system, if you will.

I do see a place for an ARM-based laptop, but only as an alternative to iPad-like devices for people who really do need a keyboard. It would be something along the lines of a MacBook Air or some of the ARM-based netbooks that have already been tried.

I simply do not see an ARM based laptop being able to run Lightroom, Photoshop and such in any meaningful way now or in a few years.

If Apple is exploring devices such as that, it would be because they can plug gaps in their product line where they simply have nothing to offer at price points where their competitors do have solutions.

As far as the original article goes, it appears to have succeeded in its intended purpose of getting more hits for the website.
 
I don't think the x86 architecture will be around for another 40 years. MS porting Windows to ARM could be a sign that even the rats are starting to leave the sinking ship.

If anything, ARM-based Windows has been the sinking ship. Look at Windows CE, which is essentially Windows (it looks, feels, and almost works like Windows 98/2000) running on a mobile platform.

I do see a place for an ARM-based laptop, but only as an alternative to iPad-like devices for people who really do need a keyboard. It would be something along the lines of a MacBook Air or some of the ARM-based netbooks that have already been tried.
I think that is what those smartbooks are kind of becoming now. They were supposed to be an even lower-power alternative to netbooks and come in a size closer to the Vaio P. Most of them use the first-gen Snapdragon CPU (ARM-based, as you know) and run some version of Linux or even Android, but I am not even sure if there is a touchscreen on them.
 
In 2051 we may have bio-neural quantum CPUs that still run x64 bytecode as their binary interface.

God damn it I wish people would stop mashing words together when it comes to futuristic computing.

Or it's just MS taking advantage of their long-demonstrated ability to simultaneously support multiple architectures to redefine what a tablet can do.

Run slowly? We already have tablets that do that.
 
This has got to be a stupid idea. I would never buy a Mac like that; one of the reasons I bought my first Mac was that it runs both OS X and Windows. If I can't run both then I will go back to Windows. Sorry, but not everything I want to do is available on the Mac.

There are many users like me who want both systems - going ARM will destroy that market. My wife loves photos, even simple editing. I wouldn't risk switching until I had tried it for a reasonable time, but by then I would be back on Windows, so no point.
 