The iPad 2 scores 750 on Geekbench, which is roughly the same as the Intel Atom N270. But the Apple A5 has a TDP of 0.5 watts while the N270 has a TDP of 2.5 watts. In other words, it provides the same performance at 5x the energy efficiency.
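A quick back-of-the-envelope check of that 5x figure, using the numbers quoted above (the scores and TDP values are the ones cited in this thread, not official spec-sheet data):

```c
#include <stdio.h>

int main(void) {
    /* Figures quoted above; treat them as rough, not official. */
    const double a5_score   = 750.0;  /* iPad 2 (Apple A5) Geekbench score    */
    const double a5_tdp     = 0.5;    /* watts, as claimed for the A5         */
    const double n270_score = 750.0;  /* roughly the same score for the N270  */
    const double n270_tdp   = 2.5;    /* watts, Intel's listed TDP            */

    double a5_ppw   = a5_score / a5_tdp;     /* 1500 Geekbench points per watt */
    double n270_ppw = n270_score / n270_tdp; /*  300 Geekbench points per watt */

    printf("A5:    %.0f points/W\n", a5_ppw);
    printf("N270:  %.0f points/W\n", n270_ppw);
    printf("ratio: %.1fx\n", a5_ppw / n270_ppw);  /* prints 5.0x */
    return 0;
}
```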

Now keep in mind that the ARM Cortex-A9 may operate at up to 2.0GHz (at 1.9 watts) while Apple is clocking the A5 at 1GHz. Such a configuration would beat any Intel Atom chip currently available in both performance and power consumption. The only problem is that no systems are using such a configuration at the moment (unless you want to buy a development board).

OK, so when you said "latest" you probably weren't referring to the Intel Atom. But ARM simply isn't targeting higher-performance markets than this right now. We'll see what the future holds.

Yeah, it's great that it will beat the Atom processor... but what Macs use the Atom processor? None. And Atom processors don't even come close to the performance of the very old Core 2 Duo, and none of the Core i-series processors would even bat an eye at the thought of an ARM processor.

Now, I understand that ARM will improve and get faster within the next few years, but to think Intel isn't going to be developing its tech at the same time is absurd. Even within 5 years I don't see ARM competing with what Intel has to offer. ARM may reach similar levels of performance to some of Intel's current high-end processors within 5 years, but by that time Intel's latest processors will still blow ARM away.

Perhaps at that point we could see ARM processors in some types of systems for the improved battery life and lower heat, but it will be a long while before we see "professionals" using ARM in their workstations, IMO.
 
The ARM processor was originally designed to work in desktop computers.

The British computer company Acorn created the ARM (or Acorn RISC Machine, as it was originally known) to power the line of desktop computers that became the Acorn Archimedes. We used to use them at school in the early-to-late 1990s and they were pretty good machines. At the time they were quite powerful and were quick at loading the OS and software.

I think I read that Apple saw the potential in Acorn's idea and worked with them on an ARM processor for Apple's Newton PDA; from there, the ARM became very popular in mobile devices.

As it was used in desktop computers before, I think there is certainly a possibility of it happening again, but as many people have already said, ARM chips do need to become more powerful to cope with modern computing demands.

Here are the details on the Acorn Archimedes (I think it was more of a British PC and don't think it was popular in America): http://en.wikipedia.org/wiki/Acorn_Archimedes
 
Dynamic CPU switching cannot happen until the instruction set architecture is abandoned as the primary interface between hardware and software. Until we have a more abstracted runtime environment, where program state is abstract enough that an entirely different architecture can be "passed the torch" to continue where another left off, it will not be possible. And emulation doesn't fit the bill, because the goal to begin with was power efficiency. If programs were written in a common byte code that was executed with JIT compilation to machine code, this would be possible. But this common runtime doesn't even exist today (unless you wanted to use Java, lol) and no one in the industry is even talking about it.

This development is probably 10 years away.

Who said anything about dynamic CPU switching...
Does Apple even do dynamic GPU switching?

There was a really good article by Ars Technica around the time of Tiger about the Quartz changes. Page 13 is a rundown of Quartz history in OS X; page 14 covers Quartz in 10.4.

But the important image was this one...
[Image: quartz-10.4-sm.png - the Quartz compositing diagram from that review]


Replace the graphics card box with an Apple SoC; then what does it matter which chip the app is running on, x86 or ARM?

If the ARM chip is powerful enough to run this OpenGL scene for a desktop machine, with enough left over to handle all the input handling and basic I/O and have some overhead for running one basic app (although I don't think the A5 hits that mark above a 13-inch screen), then does the system even need to switch contexts?

Apps could run on either processor and just feed the Quartz commands back to the SoC, or to whichever GPU has command of the Quartz Compositor at that time.

I wouldn't be surprised if this is how they do GPU switching: the Quartz Compositor just runs on the lowest-power processor, and the expensive one is shut down as soon as it's not demanded. Or they have a system that allows Quartz to duplicate itself to the more powerful GPU and seamlessly switch the monitor over.

The processor-specific code of the app would never need to move.
An app could even have its core running on ARM but spawn async threads on the x86 processor, using C blocks to encapsulate the data.

Happy to have someone smarter than me tell me why I'm wrong.
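To make the "C blocks encapsulate the data" idea concrete, here is a minimal sketch using today's GCD (libdispatch) API. The big assumption is flagged in the comments: there is no queue that targets a different CPU architecture, so the ordinary global queue below just stands in for the imagined x86 side; the point is only that a block captures its inputs, so what crosses over is data, never the app's code.

```c
/* Sketch only: "heavy_work" is an ordinary GCD global queue, standing in
 * for a hypothetical queue serviced by x86 cores. */
#include <dispatch/dispatch.h>
#include <stdio.h>

int main(void) {
    dispatch_queue_t heavy_work =
        dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_semaphore_t done = dispatch_semaphore_create(0);

    const double a = 2.0, b = 3.0;   /* inputs copied into the block */
    __block double result = 0.0;     /* output written by the block  */

    dispatch_async(heavy_work, ^{
        result = a * b;              /* stand-in for the "expensive" work */
        dispatch_semaphore_signal(done);
    });

    /* The "ARM side" keeps running and only waits for the result (data). */
    dispatch_semaphore_wait(done, DISPATCH_TIME_FOREVER);
    printf("result: %.1f\n", result);
    return 0;
}
```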
 
Who really needs an i7 or an i3 in a laptop? 95% of users go on Facebook, email, etc. and nothing else; otherwise the iPad wouldn't sell so well. Casuals are the core market for Apple now, not the professionals. An ARM MacBook would sell like hotcakes because it would get 15-20 hour battery life, be thin as hell, be cool as hell, and could be called a LAPTOP again. Look at the Asus Transformer running crappy Android. They can't keep it in stock, and it's basically an ARM laptop. Nobody says it's too slow or that it can't render their 30GB video files, because those people are the minority.

The current MacBooks get so hot they burn your lap, and that's because of x86. ARM chips will have bunches of cores later and be even better while still taking a fraction of the power of Intel chips. This year there are already a few ARM desktops running at 2GHz, and they are sufficient for most users. You guys need to face reality, and that is that the casual market is where the $$$ is at.
 
Who said anything about dynamic CPU switching...

Um trapper1204 did. The person I quoted in my post.

Does Apple even do dynamic GPU switching?

Yes, they do in the new MacBook Pros, between the integrated Intel GPU and the discrete AMD GPU. It's based on whether any running programs are using OpenGL or other graphics-heavy APIs. Previous NVIDIA-based MacBook Pros did this as well.

There was a really good article by Ars Technica around the time of Tiger about the Quartz changes. Page 13 is a rundown of Quartz history in OS X; page 14 covers Quartz in 10.4.

But the important image was this one...
[Image: quartz-10.4-sm.png - the Quartz compositing diagram from that review]


Replace the graphics card box with an Apple SoC; then what does it matter which chip the app is running on, x86 or ARM?

If the ARM chip is powerful enough to run this OpenGL scene for a desktop machine, with enough left over to handle all the input handling and basic I/O and have some overhead for running one basic app (although I don't think the A5 hits that mark above a 13-inch screen), then does the system even need to switch contexts?

Apps could run on either processor and just feed the Quartz commands back to the SoC, or to whichever GPU has command of the Quartz Compositor at that time.

I wouldn't be surprised if this is how they do GPU switching: the Quartz Compositor just runs on the lowest-power processor, and the expensive one is shut down as soon as it's not demanded. Or they have a system that allows Quartz to duplicate itself to the more powerful GPU and seamlessly switch the monitor over.

The processor-specific code of the app would never need to move.
An app could even have its core running on ARM but spawn async threads on the x86 processor, using C blocks to encapsulate the data.

Happy to have someone smarter than me tell me why I'm wrong.

I'm afraid that I can hardly begin to tell you where you're wrong. Your post is kind of all over the place.

Could you spawn x86 threads from an ARM executable? Currently the answer is no. But even in the future the answer would be no. Threads communicate via a shared memory space. You can't have threads of the same process running on different CPU architectures, because you wouldn't be able to maintain cache coherency, and the different architectures might also have different memory consistency models that conflict. There are probably other reasons too. But let's suppose for a moment that this weren't a problem.

Amdahl's law (http://en.wikipedia.org/wiki/Amdahl%27s_law) implies that the maximum speedup obtainable by parallelizing a program is limited by the length of time the serial portion takes. For that reason you wouldn't want to have a master ARM processor with slave x86 processors; your speedup would be limited by the speed of the ARM processor. This is why the Sony/IBM Cell has one powerful PowerPC core (the PPE) and eight less powerful SPEs to which computation can be offloaded. You want the master processor, which is executing the serial portion of the program, to run as fast as possible.
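For reference, the formula is speedup = 1 / ((1 - p) + p/n), where p is the fraction of the program that can be parallelized and n is the number of processors. A tiny worked example (the 90% figure is just for illustration) shows why the serial part, running on the master core, dominates:

```c
#include <stdio.h>

/* Amdahl's law: speedup = 1 / ((1 - p) + p / n). */
static double amdahl_speedup(double p, double n) {
    return 1.0 / ((1.0 - p) + p / n);
}

int main(void) {
    double p = 0.90;  /* assume 90% of the work can be parallelized */
    printf("n = 4:         %.2fx\n", amdahl_speedup(p, 4));  /* ~3.08x */
    printf("n = 8:         %.2fx\n", amdahl_speedup(p, 8));  /* ~4.71x */
    printf("n -> infinity: %.2fx\n", 1.0 / (1.0 - p));       /* 10x cap */
    return 0;
}
```

No matter how many helper cores you add, the remaining 10% serial portion caps the speedup at 10x, and that portion runs entirely at the speed of the master core.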
 
Um trapper1204 did. The person I quoted in my post.

I'm afraid that I can hardly begin to tell you where you're wrong. Your post is kind of all over the place.
Ummm.... this is what Trapper1204 said.

My prediction, and it should be noted that I am always right, is that Apple will use the same technology they are using to switch between graphics cards to dynamically switch between CPUs.
You jumped on three words, "dynamic CPU switch". What he said was that it's the same system they use for dynamic GPU switching.

If we look at how Quartz works as a system, Apple could have implemented GPU switching by keeping the Quartz Compositor on one GPU with the frame buffer and having the more powerful GPU feed rendered windows back to the compositor's VRAM instead of directly to the frame buffer. I understand that in this system the integrated graphics never powers down. Under that setup of Quartz, an application can render its own windows on the GPU or CPU and then pass the results to the VRAM. Applications that create work needing the more powerful GPU use it, but when that work is exhausted, that GPU shuts off.

If that is the system Apple uses for GPU "switching", then it could be extended. Although would you really call it "switching"? It's more like advantageous usage, since the lower GPU never switches off; it's always doing something.

The same advantageous usage could be applied to applications. The applications wouldn't need to move between CPUs at all; they would just write their rendered windows to the GPU in charge of the screen. Work that needs the x86 would run on the x86, and if that work runs out, that CPU powers down.

That would have the benefits of dynamic CPU switching, but it wouldn't have the cache issues that everyone is hammering on as making it impractical. The only thing that would have to move between CPUs would be data, not code.

Sorry for being all over the place.
I really should have avoided what seems to be the red herring in this situation: code and running apps moving. They don't need to move between CPUs; they just need to move data, in much the same way they already move data.
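Here is a purely illustrative sketch of that idea. None of these types or functions come from any Apple API; they are hypothetical names for the moving parts: apps render wherever it makes sense, and only pixel data (never code) crosses over to whichever device owns the screen.

```c
#include <stdbool.h>
#include <stdio.h>

typedef struct { char pixels[64]; } Surface;              /* a rendered window: pure data */
typedef struct { const char *name; bool busy; } Device;   /* a CPU or GPU that can render */

/* Hypothetical: the app draws its window on whichever device suits the work. */
static Surface render_window(Device *dev) {
    printf("rendering on %s\n", dev->name);
    Surface s = { "window contents" };
    return s;
}

/* Hypothetical: the compositor stays on the frugal device that owns the
 * frame buffer and only consumes surfaces handed to it. */
static void composite(Device *owner, Surface s) {
    printf("compositing '%s' on %s\n", s.pixels, owner->name);
}

static void frame_tick(Device *fast, Device *frugal) {
    Surface s = render_window(fast->busy ? fast : frugal);
    composite(frugal, s);                              /* only data crosses over */
    if (!fast->busy)
        printf("powering down %s\n", fast->name);      /* idle: switch it off    */
}

int main(void) {
    Device x86 = { "x86", true };
    Device arm = { "ARM SoC", false };
    frame_tick(&x86, &arm);
    return 0;
}
```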
 
Ummm.... this is what Trapper1204 said.


You jumped on three words, "dynamic CPU switch". What he said was that it's the same system they use for dynamic GPU switching.

If we look at how Quartz works as a system, Apple could have implemented GPU switching by keeping the Quartz Compositor on one GPU with the frame buffer and having the more powerful GPU feed rendered windows back to the compositor's VRAM instead of directly to the frame buffer. I understand that in this system the integrated graphics never powers down. Under that setup of Quartz, an application can render its own windows on the GPU or CPU and then pass the results to the VRAM. Applications that create work needing the more powerful GPU use it, but when that work is exhausted, that GPU shuts off.

If that is the system Apple uses for GPU "switching", then it could be extended. Although would you really call it "switching"? It's more like advantageous usage, since the lower GPU never switches off; it's always doing something.

The same advantageous usage could be applied to applications. The applications wouldn't need to move between CPUs at all; they would just write their rendered windows to the GPU in charge of the screen. Work that needs the x86 would run on the x86, and if that work runs out, that CPU powers down.

That would have the benefits of dynamic CPU switching, but it wouldn't have the cache issues that everyone is hammering on as making it impractical. The only thing that would have to move between CPUs would be data, not code.

Sorry for being all over the place.
I really should have avoided what seems to be the red herring in this situation: code and running apps moving. They don't need to move between CPUs; they just need to move data, in much the same way they already move data.

I actually have a feeling that the way you described it is not the way Apple has implemented dynamic GPU switching. This is because PCIe bandwidth is so low compared to the memory bandwidth of the discrete GPU (we're talking several GB/s versus several hundred) that sending data over PCIe should be avoided. But hey, I could be wrong. I honestly have no idea how they are doing it.

But anyway, OK, I get what you are saying. Makes sense.

One worry, however, is that the x86 might never "run out". For example, in this situation all it would take is one legacy background process (suppose the process can only run on x86) and all of a sudden the x86 processor has to be on permanently. This problem doesn't happen with GPU switching because background processes aren't using graphics.
 
Oh, NO!
Not yet another hardware transition and emulation.

Now with the App Store they don't need it: they impose the rule that everything uploaded there must be a multi-platform x86+ARM binary, and within a month 90% of the apps (that are actually sold and used) will become such.
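For what it's worth, this is how universal binaries already worked during the PPC-to-Intel transition and how iOS apps can carry multiple architectures: one Mach-O file with one slice per architecture, picked by the loader at launch. A minimal sketch of what the developer's side might look like (the -arch armv7 build of a Mac app is hypothetical here; today that flag only targets iOS):

```c
/* Hypothetically built as a fat binary with something like:
 *     clang -arch x86_64 -arch armv7 -o MyApp main.c
 * Compile-time checks keep the rare architecture-specific bits in one place. */
#include <stdio.h>

int main(void) {
#if defined(__x86_64__) || defined(__i386__)
    printf("running the x86 slice\n");
#elif defined(__arm__)
    printf("running the ARM slice\n");
#else
    printf("running on some other architecture\n");
#endif
    return 0;
}
```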
 
I actually have a feeling that the way you described it is not the way Apple has implemented dynamic GPU switching. This is because PCIe bandwidth is so low compared to the memory bandwidth of the discrete GPU (we're talking several GB/s versus several hundred) that sending data over PCIe should be avoided. But hey, I could be wrong. I honestly have no idea how they are doing it.

But anyway, OK, I get what you are saying. Makes sense.

One worry, however, is that the x86 might never "run out". For example, in this situation all it would take is one legacy background process (suppose the process can only run on x86) and all of a sudden the x86 processor has to be on permanently. This problem doesn't happen with GPU switching because background processes aren't using graphics.


It seems that NVIDIA's Optimus for Windows has the PCIe bandwidth limitation. In their system the monitor is wired to the Intel IGP and the NVIDIA GPU writes into the frame buffer over PCIe.

Hey, like you I don't know how Apple does it either. It just seems like that was the easy way to do it without relying on a particular hardware vendor. Even NVIDIA said they didn't know how Apple did it using their hardware.

Legacy software is going to be the other big issue. The system itself shouldn't be a problem now that iOS has sponsored a ground-up rebuild of the core OS, but a background app would still be a big worry.
 
Backwards compatibility

If Apple can allow all the old programs to be used on the ARM architecture, then I am all for it.
 
You do realize that this was mostly driven by multinational corporations that didn't want to pay software engineers to update all of their ancient legacy software, right? Do you also believe MS wanted IE6 to stick around for 10 years? :rolleyes:

Since Ballmer hasn't written a piece of code in the past twenty years, yes. M$ is as lazy about paying software engineers as its customers are.
 
Not again

NO! NO! NO! NO! I made it through the PPC-to-Intel switch, I JUST bought my first MacBook, and they may go to ARM? I will not go through another one of these; if they do this, I'm going to PCs. ( :( )
 
I've been thinking about this for a few weeks, and the more I think about it, the more I think there must be some mistake. In theory I can see an Air-like product using ARM CPUs... something that's low power and low performance. Except that I can't even see that... if Apple were really interested in going even lower power, that's already available with things like AMD's E-350 and Intel's Atom.

Yes, ARM will supposedly have more competitive products in a few years... but that's versus the low-end x86 hardware of TODAY, and before Intel's two-year cycle on Atom kicks into gear.

So even on the low end/small form factor front, I don't get this. Heck, IMO it makes more sense to move iPad from ARM to x86 this year or in the next two, than the other way around.

Besides not making much sense even from a power standpoint, you're talking about killing compatibility... AGAIN. Apple has broken stuff between OS releases, but eventually cut things completely with TWO previous CPU transitions and one complete OS transition. Continuing to do that just kind of makes it a toy platform, and its backwards compatibility is already dismal next to what Microsoft provides.

And...it loses Windows compatibility. Maybe. Of course Microsoft is adding ARM support for the next Windows release, and quite possibly that'll mean it could run on an ARM Mac too, not that that's a sure thing. But that still potentially means compatibility issues, depending on what sort of emulation Microsoft provides.

Basically, I'm not getting this. If they want to go lower power, they already can. By the time ARM has options that would supposedly work for this (on the low end), Intel and AMD will have better too.

On the high end? There is no high end. There's not even a low end for ARM yet...high end? What they're supposedly offering in a few years isn't competitive with what Intel offered years ago. Are they ceding the mid-range/high end market?

I've even thought, maybe a $500+ notebook? But again... there are other existing options, with better to come.

This just doesn't make sense, and it makes investing in OS X stuff a bit scary, given that for all we know this might actually be true...
 
Now with the App Store they don't need it: they impose the rule that everything uploaded there must be a multi-platform x86+ARM binary, and within a month 90% of the apps (that are actually sold and used) will become such.


This would be one sure way to transition the Photoshop crowd to Windows. :eek:
 
Moving to ARM will be a bad idea. That's all I can say. :mad:

This would be one sure way to transition the Photoshop crowd to Windows. :eek:

SO glad I'm not the only one horrified by this.

The only thing I can think is that instead of this being about their notebooks being dumbed down, it's actually about their iOS devices being smartened up... err... as in a real OS X PC replacing an iPad or something, if that makes sense. (I originally hoped the iPad was running real, full OS X, with an optional touch interface....)
 
The iPad 2 scores 750 on Geekbench, which is roughly the same as the Intel Atom N270. But the Apple A5 has a TDP of 0.5 watts while the N270 has a TDP of 2.5 watts. In other words, it provides the same performance at 5x the energy efficiency.

The advantage isn't that it provides the same performance as a slow Intel chip. The advantage is that ARM can start with processor design tricks that crank up both the performance and the TDP by up to 5x, and then add more performance tricks on top of or instead of that, whereas Intel is already at maximum TDP for many of its product niches and so is stuck inventing more limited or expensive performance-boosting tricks that don't increase TDP.
 
Servers?

The one bit of "news" in this article that I am interested in is the subject of servers. It would be great if Apple got back into the server market, since they killed the XServe, their previous offering. OS X is a robust operating system that is powerful enough for enterprise computing, and it would be great to have a 1U- or 2U-sized machine. Just a thought...
 
that would be admitting to a mistake....

The one bit of "news" in this article that I am interested in is the subject of servers. It would be great if Apple got back into the server market, since they killed the XServe, their previous offering. OS X is a robust operating system that is powerful enough for enterprise computing, and it would be great to have a 1U- or 2U-sized machine. Just a thought...

Unlikely, since that would be publicly admitting that killing the XServe was a mistake. The Steve doesn't like publicly admitting to mistakes.

Back when the XServe was killed, I said several times that Apple should have partnered with one of the tier 1 server vendors (e.g., HP/Compaq) to have a supported version of Apple OS X Server running virtualized on a small number of server configurations (a 1U, a 2U, a 4U and a 7U, for example).

But since it looks like Apple OS X Server itself dies with Snow Leopard - not gonna happen.
 
Unlikely, since that would be publicly admitting that killing the XServe was a mistake. The Steve doesn't like publicly admitting to mistakes.

Back when the XServe was killed, I said several times that Apple should have partnered with one of the tier 1 server vendors (e.g., HP/Compaq) to have a supported version of Apple OS X Server running virtualized on a small number of server configurations (a 1U, a 2U, a 4U and a 7U, for example).

But since it looks like Apple OS X Server itself dies with Snow Leopard - not gonna happen.

Isn't all the server plumbing in Lion? (Just asking - I heard it was, but haven't investigated myself).
 
The one bit of "news" in this article that I am interested in is the subject of servers. It would be great if Apple got back into the server market, since they killed the XServe, their previous offering. OS X is a robust operating system that is powerful enough for enterprise computing, and it would be great to have a 1U- or 2U-sized machine. Just a thought...

Some tech rag had a commentary about the new Apple data center pic Steve flashed during WWDC, and it turns out that Apple is a big HP server customer.

XServe is dead; otherwise it would have been in their data center.
 
Isn't all the server plumbing in Lion? (Just asking - I heard it was, but haven't investigated myself).

Yes, it's also in 10.5, 10.4, 10.3, 10.2....

Just like Windows Server and Windows client are built from the same sources and are 98+% identical bits.

It's more of a symbolic issue - Apple kills all server hardware, and then kills the server SKUs for the OS.

Over in the "No more Rosetta in Apple OSX 10.7" thread there are lots of posts saying "you should have seen the writing on the wall".

Killing server hardware and eliminating separate SKUs for server software are definitely "writing on the wall".


XServe is dead; otherwise it would have been in their data center.

Yes, XServe is dead. XServe was a hobby. You don't build a billion-dollar data center with hobby servers.

Even if Apple still made XServes and Apple OS X 10.7 had a server SKU, they still wouldn't have used it.

Maybe the main reason that Apple killed the XServe was so that they wouldn't be asked "why isn't the data center using XServes?" !!
 
The writing was on the wall when HP released 1U servers that hold 6 or 8 drives. Now it's 8; I think a few years ago it was 6. 2U servers now hold 16.

You would have to buy a lot more XServes to equal one HP server.
 