OMG! A rumor on MacRumors about Apple doing something in the future!!!

IT MUST BE TRUE!!!

RUN FOR THE HILLS SCREAMING LIKE A LITTLE GIRL!!!!

I WILL NEVER BUY APPLE AGAIN WHEN APPLE MAKES THIS CHANGE BECAUSE MACRUMORS TOLD ME IT WAS FACT!!!!!

I only read about 3 pages into this thread... that was pretty much the thrust of most responses... seriously, people... it's a rumor? *edit* Actually it's not even that. It's just re-reporting someone's speculations.
 
With all due respect, you have no idea what I do or do not understand. I understand that ARM is a RISC-based architecture that was originally designed for low-power systems. I understand that most of the hundreds of companies licensed to use ARM are not architecture licensees and are not allowed to use the ARM architecture to design their own chips. You can obtain an architecture license, and several companies, such as Apple, Marvell, Qualcomm, DEC and NVIDIA, have done so and have designed SoCs based on their own customizations of the ARM architecture. I know that most ARM processors cannot compete with modern Core2 processors in terms of desktop computing power, but on the other hand, several companies are working on ARM-based servers. And I know that the performance of ARM processors has increased much more than the performance of Intel processors over the last several years. Whether ARM performance can outstrip Core2 performance remains to be seen, because the relationship between computing power and energy requirements/thermal output is not necessarily linear.

I'm not advocating ARM as a replacement for Core2, but as others have said, any company that does not make contingency plans for possible architectural changes to keep up with technological breakthroughs is dooming itself to eventual failure.

What you write shows your lack of understanding; that's how I know, with all due respect. As I wrote, if a better architecture/chip comes along, of course they'll use it, but ARM is not it. It's trying to put a square peg in a round hole. They don't have to make contingency plans by considering ARM specifically: recompiling the OS and finding a compatibility tool for existing code, as in the PowerPC transition, is pretty straightforward for any decent architecture, but it's actually harder on ARM if they want to maintain decent performance. And Intel won't produce an ARM chip, so you lose their fabrication advantage.

ARMs are being used for servers because they have lower power usage in absolute terms and server tasks can be easily parallelized; they just put a lot of processors on a card. That doesn't mean it will work for user/desktop tasks. ARM processors may have increased in performance, but only because they started from a low base; they were much slower than modern Intel chips to begin with.

You can regurgitate stuff from the internet as much as you like; it doesn't mean you have an understanding of the issue.
 
That's emulation, not virtualization. QEMU already supports ARM emulation.

What do you mean? Hyper-V and VMware are virtualization technologies. Virtualization is the same concept as emulation, but it happens at the hardware level (hypervisor) rather than in software.

Virtualization is superior to emulation, in my opinion.
 
What about a 32-core ARM processor in a MacBook Air? Same performance per watt, big redundancy... you can turn as many CPUs on or off as you need at any given time. If you need to do something faster, you can basically split a task into several parallel threads. Also, you could place two separate 16-core ARMs in the case for better heat dissipation or other design constraints.

In other words, it looks like ARM is a new take on the RISC approach: if you need speed, you basically do a lot of simpler stuff in parallel to reach your performance requirements.

I'm not a computer engineer, but moving to ARM doesn't look like as bad an idea as many people are saying.

You can't make arbitrary code parallel; not all programming tasks can be split up that way. You're also asking for a wholesale rewriting of the existing code base, with a relatively small return in terms of parallelism, which few developers will jump at. Then you have a chicken-and-egg problem, with developers not supporting a 16- or 32-core machine and the machine having poor performance as a result.
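To make that concrete, here's a minimal C sketch (my illustration, not from the thread). The first loop can be split across cores because its iterations are independent; the second can't, because each step needs the previous step's result:

```c
#include <stddef.h>

/* Parallelizable: out[i] depends only on in[i], so the
 * iterations can be divided among any number of cores. */
void scale(const double *in, double *out, size_t n, double k) {
    for (size_t i = 0; i < n; i++)
        out[i] = in[i] * k;
}

/* Not parallelizable: each step feeds the next (x <- x*x + c),
 * so the iterations must run in order no matter how many
 * cores are available. */
double iterate_map(double x, double c, size_t steps) {
    for (size_t i = 0; i < steps; i++)
        x = x * x + c;
    return x;
}
```

Code like the second function is what "arbitrary code" tends to look like in practice, which is why piling on cores has diminishing returns.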
 
Hmmm, I just rendered 2 scenes in C4D

Scene One

rMBP = 00:00:29
MP = 00:00:29

Both the same; it was a simple scene

Scene Two

rMBP = 00:02:56
MP = 00:02:08

As scenes get more complex, the time difference increases... Still, both scenes are really simple... A complex scene can take 1 hr or more per frame... then the real time difference will show

rMBP = 10,1 (Retina Mid 2012)
MP = 4,1 (Early 2009)

That is rather interesting. Are you using FCP X or FCP 7?

Hate to tell you this, but adding more cores doesn't magically make your computer run faster. Your software has to be able to take advantage of them before you'll see any gains.

Apple helps programmers accomplish multithreading with many great technologies, such as Grand Central Dispatch.
 
OMG! A rumor on MacRumors about Apple doing something in the future!!!

IT MUST BE TRUE!!!

RUN FOR THE HILLS SCREAMING LIKE A LITTLE GIRL!!!!

I WILL NEVER BUY APPLE AGAIN WHEN APPLE MAKES THIS CHANGE BECAUSE MACRUMORS TOLD ME IT WAS FACT!!!!!

I only read about 3 pages into this thread... that was pretty much the thrust of most responses... seriously, people... it's a rumor? *edit* Actually it's not even that. It's just re-reporting someone's speculations.

To be honest, MacRumors has been doing a pretty good job this year, with their rumors turning out to be VERY true. Sure makes press conferences disappointing.
 
Also, $380 is the publicly available price estimate. No hardware manufacturer pays Intel that much, and the real price will never be made public.

If you stacked enough ARM cores to match an i7-2760QM, it would look like a Pringles tube.

You already mentioned this point in your previous post:

Even if each ARM chip costs $10, that would be $80, which is more expensive than what Intel would charge Apple in a multi-million-volume contract.

This case was already covered in the exchange with roxxette:

Agree, that's impossible to answer, but I don't think it would be crazy if they get it at 50 bucks apiece

Okay, in case your assumption is correct:

$50 = 13% of Intel's "Recommended Customer Price" ($380).

A single ARM CPU (from my previous example) costs $5.

Now, imagine how little it would cost in a "multi-million volume contract" ;)

If the same 13% ratio applies ==> $0.65 - less than a dollar! :eek: That is truly "dirt cheap" :)
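Spelling out the ratio being applied there (just restating the numbers from this exchange):

\[
\frac{\$50}{\$380} \approx 13\%, \qquad \$5 \times 13\% = \$0.65
\]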
 
I am very surprised. So many computer scientists and parallel programming experts here!
 
Apple helps programmers accomplish multithreading with many great technologies, such as Grand Central Dispatch.

True, but Grand Central Dispatch doesn't make creating a multithreaded application easy, just easier. If you have to offload even the most basic app to multiple CPUs to offset the fact that its single-core performance is below par, no one is going to want to use it.
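For reference, here's roughly what the easy case looks like with GCD (a minimal C sketch; compiles with clang on OS X, where blocks are enabled by default). Note that dispatch_apply only helps because the iterations are already independent; it doesn't create that independence for you:

```c
#include <dispatch/dispatch.h>
#include <stdio.h>

#define N 8

/* File-scope arrays so the block can reference them directly. */
static double in[N] = {1, 2, 3, 4, 5, 6, 7, 8};
static double out[N];

int main(void) {
    dispatch_queue_t q =
        dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

    /* dispatch_apply runs the block N times, spreading the
     * iterations across the available cores. This is safe only
     * because iteration i touches out[i] and nothing else. */
    dispatch_apply(N, q, ^(size_t i) {
        out[i] = in[i] * in[i];
    });

    for (int i = 0; i < N; i++)
        printf("%.0f ", out[i]);
    printf("\n");
    return 0;
}
```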
 
What do you mean? Hyper-V and VMware are virtualization technologies. Virtualization is the same concept as emulation, but it happens at the hardware level (hypervisor) rather than in software.

Virtualization is superior to emulation, in my opinion.

Virtualization is "superior to emulation" on a x86 hardware because it's not emulating anything on it, it's just sharing the host operating system's available resources between it and the guest operating systems so that you can run two or more operating systems at the same time on a same machine. That's why you can run Windows perfectly fine via VMWare or Parallels on your Mac without taking much of a performance hit. With virtualization the emulation of a x86 architecture on a x86 would be just a waste of resources, so the software isn't doing that.

Emulation, on the other hand, is about mimicking the behaviour of some other specific architecture. In order to be fully compatible with the software that you are going to run on the emulator, the emulator needs to copy the target machine's behaviour very, very accurately, and this requires a lot of resources. With ARM processors you usually have a very limited amount of resources available to you after the host OS has taken its own share, and it would be quite difficult to emulate any modern x86 CPU fast enough to run an x86(-64) version of Windows on it, let alone any software on top of it.
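To illustrate where that cost comes from, here's a toy interpreter loop in C (a made-up two-instruction "guest ISA", purely illustrative, not any real architecture). Every guest instruction costs the host a fetch, a decode, and a dispatch; virtualization pays none of this because guest code runs natively:

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* A toy guest "ISA": one opcode byte, one operand byte. */
enum { OP_HALT = 0, OP_LOAD = 1, OP_ADD = 2 };

/* Interpret a guest program: each guest instruction costs the
 * host several real instructions, which is the per-instruction
 * overhead that emulation pays and virtualization avoids. */
static int32_t run(const uint8_t *code) {
    int32_t acc = 0;
    for (size_t pc = 0;;) {
        uint8_t op  = code[pc++];          /* fetch */
        uint8_t arg = code[pc++];
        switch (op) {                      /* decode + dispatch */
        case OP_LOAD: acc  = arg; break;   /* execute */
        case OP_ADD:  acc += arg; break;
        case OP_HALT: return acc;
        }
    }
}

int main(void) {
    const uint8_t prog[] = { OP_LOAD, 40, OP_ADD, 2, OP_HALT, 0 };
    printf("%d\n", run(prog));             /* prints 42 */
    return 0;
}
```

A real emulator also has to model memory, flags, devices and so on, which only widens the gap.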
 
You mean other than the fact that Microsoft would have to approve it and develop drivers for it. Oh and then Apple would also be gutting their iPad sales if their computers had a mobile OS on them as well as OS X.

How would it be gutting iPad sales? You've already bought a very expensive Mac computer so they've already got your cash, it'd be no different than people putting Windows on their current systems.
 
What do you mean? Hyper-V and VMware are virtualization technologies. Virtualization is the same concept as emulation, but it happens at the hardware level (hypervisor) rather than in software.

There is no translation in virtualization. Emulation translates instructions from one instruction set (ARM) to another (x86). Virtualization just feeds the instructions as-is (x86 to x86).

The concepts are different, though sometimes the word virtualization is used to mean emulation (the old VirtualPC software was an x86 emulator that ran on PPC Macs).

Virtualization is superior to emulation, in my opinion.

Not superior, different.

----------

That is rather interesting. Are you using FCP X or FCP 7?

Nothing interesting in that. The Mac Pro has much better internal buses, much wider I/O channels, more of them, etc. It's not all about pure CPU power, not outside of benchmarks like Geekbench. In the real world, the better CPU might not perform better if the CPU is not the bottleneck to begin with.

Apple helps programmers accomplish multithreading with many great technologies, such as Grand Central Dispatch.

Sure, but Apple itself can't make every instruction independent of the others. Sometimes, code just needs to wait for other code to finish. Nothing anyone can do about that, and no amount of tools or frameworks is going to change that. Blocking code is blocking.
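A tiny GCD illustration of that point (my sketch; the queue label is made up). Even with every core idle, step 2 cannot start until step 1 has finished, because it needs step 1's result:

```c
#include <dispatch/dispatch.h>
#include <stdio.h>

static int result;

int main(void) {
    /* A serial queue: blocks run one at a time, in order. */
    dispatch_queue_t q =
        dispatch_queue_create("com.example.serial", DISPATCH_QUEUE_SERIAL);

    /* Step 2 depends on step 1's result, so no number of
     * cores changes the fact that they run back to back. */
    dispatch_sync(q, ^{ result = 40; });   /* step 1 */
    dispatch_sync(q, ^{ result += 2; });   /* step 2 waits on step 1 */

    printf("%d\n", result);                /* prints 42 */
    dispatch_release(q);
    return 0;
}
```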
 
Virtualization is "superior to emulation" on a x86 hardware because it's not emulating anything on it, it's just sharing the host operating system's available resources between it and the guest operating systems so that you can run two or more operating systems at the same time on a same machine. That's why you can run Windows perfectly fine via VMWare or Parallels on your Mac without taking much of a performance hit. With virtualization the emulation of a x86 architecture on a x86 would be just a waste of resources, so the software isn't doing that.

Emulation, on the other hand, is about mimicking the behaviour of some other specific architecture. In order to be fully compatible with the software that you are going to run on the emulator, the emulator needs to copy the target machine's behaviour very, very accurately, and this requires a lot of resources. With ARM processors you usually have a very limited amount of resources available to you after the host OS has taken its own share, and it would be quite difficult to emulate any modern x86 CPU fast enough to run an x86(-64) version of Windows on it, let alone any software on top of it.

There are different virtualization technologies. Hyper-V and VMware ESX are hypervisors. They run at a lower level than the host operating system, whereas virtualization technologies like Parallels or VMware Desktop run inside the operating system. With Hyper-V and ESX, the CPU has to be capable of running a hypervisor. The host operating system also has to support the hypervisor, which is why the hypervisor is built into ESX and Windows Server (Hyper-V).


There is no translation in virtualization. Emulation translates instructions from one instruction set (ARM) to another (x86). Virtualization just feeds the instructions as-is (x86 to x86).

The concepts are different, though sometimes the word virtualization is used to mean emulation (the old VirtualPC software was an x86 emulator that ran on PPC Macs).

Not superior, different.

Yes, emulation has much more overhead than a virtualization technology that runs a hypervisor. Sorry, I should have been clearer about which virtualization technology I was referring to. The old software virtualization technologies are dead to me. It is all about running a hypervisor these days.
 
Yes, emulation has much more overhead than a virtualization technology that runs a hypervisor. Sorry, I should have been clearer about which virtualization technology I was referring to. The old software virtualization technologies are dead to me. It is all about running a hypervisor these days.

No, it's not. If I want to run my old PPC software on x86, I need emulation. Same if I want to run my PA-RISC enterprise software packages on my IA-64 Integrity servers.

It's different. Just different. It depends on your needs. If you want to consolidate OS platforms onto a single hardware platform and facilitate hardware migrations, virtualization is key. If you want to run code compiled for a different architecture/OS, emulation is key.
 
This is off topic, but I couldn't let it go. We just moved to Boston from Iowa. Needless to say, navigating roads is a bit more complex. I have been taken to the wrong place four times now. Restaurants that have been in the same location for decades show up on the map at a completely wrong address. It's also often unable to find places (I am forced to find them on Google, copy the address, and paste it into Maps).

So no, it hasn't been fixed. It's really, REALLY bad. And this is Boston we are talking about, not some place in the middle of rural Kentucky.

I agree that some people like to just hop on the "let's complain" bandwagon, but that is not the case here. Maps is currently God-awful.

Well, you could've let it go, because your opinion is just that: another anecdote against Apple Maps.

So let me give you a few more: I currently live in Switzerland and Apple's implementation seems fine.

Same thing when checking my parents' place in Brazil.

And same thing when talking to a German colleague who uses Apple Maps on a daily basis.

Perfect? Of course not. But for a 1-month-old effort it already looks great, powered by TomTom and all.

Not to mention that copycat Google has been dragging its ass just in terms of feature parity between the iOS and Android versions of Google Maps.

At least Apple's Maps are already vector-based and with some really cool things built-in. Don't like it? Go buy a Samsung.
 
There are different virtualization technologies. Hyper-V and VMware ESX are hypervisors. They run at a lower level than the host operating system, whereas virtualization technologies like Parallels or VMware Desktop run inside the operating system. With Hyper-V and ESX, the CPU has to be capable of running a hypervisor. The host operating system also has to support the hypervisor, which is why the hypervisor is built into ESX and Windows Server (Hyper-V).

Yes, but that still doesn't change the fact that those virtual machines are not emulators, and they won't run x86 code on ARM. While both platforms can support a hypervisor, it doesn't mean they are compatible with each other. You could, however, use ARM processors in servers to virtualize multiple ARM-compatible operating systems on the same machine.
 
It's not Windows 8. It's Windows RT.

- Can't run Win32/Win64 software
- Can't run .NET software
- Can't join an Active Directory domain

The list of things it doesn't do is long. Again, Windows RT is essentially iOS: a locked-down, walled-garden OS that happens to be called the same thing as its older brother in order to confuse users.

Obviously, it's working, as you and others in this very thread have been very confused about Microsoft's ARM endeavour. Bravo, Microsoft, you've succeeded.

You can be pretty sure that Chief Enderle-like Trolls like Aiden Shaw will still call it "Windows 8", of course... ;)
 
Well, you could've let it go, because your opinion is just that: another anecdote against Apple Maps.

So let me give you a few more: I currently live in Switzerland and Apple's implementation seems fine.

Same thing when checking my parents' place in Brazil.

And same thing when talking to a German colleague who uses Apple Maps on a daily basis.

Perfect? Of course not. But for a 1-month-old effort it already looks great, powered by TomTom and all.

Not to mention that copycat Google has been dragging its ass just in terms of feature parity between the iOS and Android versions of Google Maps.

At least Apple's Maps are already vector-based and with some really cool things built-in. Don't like it? Go buy a Samsung.

Defensive post is defensive. I have nothing more to add. ;)
 
No it's not. If I want to run my old PPC software on x86, I need emulation. Same if I want to run my PA-RISC enterprise software packages on my IA-64 Integrity servers.

It's different. Just different. Depends on your needs. If you want to consolidate OS platforms onto a single hardware platform and facilite hardware migrations, virtualization is key. If you want to run code compiled for a different architecture/OS, emulation is key.

Sorry, I had a brain fart. I know what emulation is. I don't know what I was thinking. When you work on this stuff for many years, things can start to get mixed up in the mind. It is actually a bit embarrassing, but oh well.
 
I think people are being so emotional about this *rumour*. We know *nothing* about Apple's actual plans, yet so many are quick to condemn Apple's future based on a rumour; it's laughable.

Imagine they only use ARM chips in low end devices. The average home user has quite a lot of computing power *today* (and some might even argue they had it yesterday, if there existed optimised hardware, OS and applications). How much additional computing power will a home user need in the next 5 years? Why this race and push for more computing power - do the majority of us really need it? Will we be able to take advantage of it, or will it just sit there and be the fastest web surfing device on the planet, using 10% of its computing potential??

Sure, some have needs for more computing power (the minority that creates the content, for example) - for them, why assume that an ARM architecture must be used in every single product? It doesn't have to be an all-or-nothing decision; they've already set a precedent with part of their products running on ARM and the rest running on Intel. So, over time they move more of the Intel products to ARM - is that inconceivable? Maybe they merge iOS and OS X to create a mobile/tablet/home computing OS, and continue OS X for the more computing-intensive environments (graphic design, video editing, medical research, for example).

What are the next big OS requirements (for the home)? What's expected, what would be nice? How much power does that require? Aren't we moving toward devices becoming individually smarter, meaning that we're offloading some of the centralised power requirements of yesterday to devices such as tablets, set-top boxes and embedded systems? Rather than perpetually scaling chips up to be significantly more powerful, shouldn't we be looking to put a chip in every product and let each take care of its own processing requirements? And is Intel the company to provide those chips?

I think people need to relax about this; the uproar in these two threads over a rumour is so melodramatic.
 
Defensive post is defensive. I have nothing more to add. ;)

I think it's funny how Google didn't have a map app until Apple released theirs. Now they're griping about their knock-off not getting approved on the App Store.

Well, maybe if you innovated a little more this wouldn't be a problem for you, Google. Apple invented vectors for their maps. What have you done? Huh? Yeah. Thought as much.

Google is desperate.
 
I think it's funny how Google didn't have a map app until Apple released theirs. Now they're griping about their knock-off not getting approved on the App Store.

Well, maybe if you innovated a little more this wouldn't be a problem for you, Google. Apple invented vectors for their maps. What have you done? Huh? Yeah. Thought as much.

Google is desperate.

Google has had vectors in their maps for some time...
 