If you distribute programs as llvm bitcode, and compile them as part of the installation process, then there is no need for fat binaries.

Do you have any idea how long it would take to compile an entire operating system? Google Chrome already takes forever, so does WINE, and so does any major app. Nobody would have the patience to sit there and watch it compile, outside of the geek world that is. Universal Binaries are the answer. They take up slightly more space on the drive, but HDD space is not as restrictive as it used to be. I don't see the advantage.
 
Oh my god. Please explain how you would turn this into _one_ set of llvm code:

Code:
#include <stdio.h>
int main (void) {
#if defined (__i386__)
  printf ("This program runs on an x86 32 bit processor\n");
#elif defined (__x86_64__)
  printf ("This program runs on an x86 64 bit processor\n");
#elif defined (__arm__)
  printf ("This program runs on an ARM processor\n");
#else
  printf ("This program runs on some processor that I don't know of\n");
#endif
}

That is in fact an interesting question with possibly interesting answers.

I guess the real goal for most programmers is NOT having to worry about that kind of thing. You don't want to care which processor you are running on; typically you have higher-level things to take care of.
... but for that you need a virtual machine. Which LLVM is, kind of.
 
Do you have any idea how long it would take to compile an entire operating system? Google Chrome already takes forever, so does WINE, and so does any major app. Nobody would have the patience to sit there and watch it compile, outside of the geek world that is. Universal Binaries are the answer. They take up slightly more space on the drive, but HDD space is not as restrictive as it used to be. I don't see the advantage.

The OS could still be distributed traditionally.

And compiling from bytecode to native is typically MUCH faster than from source to native; you could say close to realtime (remember that LLVM has / can be used as a JIT). That apart from the fact that C++, the language in which Chrome is written, is notoriously slow to compile.

... but even then, LLVM compiles C++ much faster than GCC does, for example, even though it looks like its C++ support is still unfinished.
 
I think we could get ARM in something new.

Think something in between an iMac and an iPad that is basically a flat wedge shaped device that is meant to sit on a desk that runs iOS. Hardware would not need to be much more powerful than the current A5, power requirements of the larger screen would be negated by it being a desktop device.

Kinda like the touch screen iMacs that are so often rumored.
 
This is just a bunch of nonsense.
ARM processors are an order of magnitude away from Intel CPUs in terms of speed. There's no way ARM could just come up with a redesigned CPU and catch up just like that. Developing a Sandy Bridge competitor would take many years, maybe a decade, and Intel isn't exactly sleeping, are they? You can't just throw together a bunch of ARM cores and expect it to be as fast as an Intel CPU.

This whole idea of a switch to ARM is utter nonsense. ARM chips are very good for mobile, low-power devices. But if you need processing power you have to go to Intel or AMD. Period. To those who say that the vast majority doesn't need processing power: you're wrong. We'll see how many people buy netbooks and tablets over notebooks and desktops. Many will, but the vast majority won't for the next 10 years. Also: how are ARM / Chrome OS / Android netbooks doing? Not exactly fine.

Also, the idea that MSFT 'must' switch to ARM is silly. If they wanted a decent tablet they could just as well take their ARM-compatible phone OS and enhance it the same way Google and Apple are doing it. I'm guessing that it was a strategic decision made by the mighty Windows Desktop division inside MSFT to go with the full Windows OS and trim it down for tablets. It will be a mistake.
 
I think we could get ARM in something new.

Think something in between an iMac and an iPad that is basically a flat wedge shaped device that is meant to sit on a desk that runs iOS. Hardware would not need to be much more powerful than the current A5, power requirements of the larger screen would be negated by it being a desktop device.

Kinda like the touch screen iMacs that are so often rumored.

What about the opposite: a full(er?), less/unrestricted OS X for iPad. Add a good base/charger, and you cover the same use cases and more. Would even negate any Android theoretical advantage.
 
Oh my god. Please explain how you would turn this into _one_ set of llvm code: [snip: three #ifdefs]
You're saying fat binaries are better? By that time (more than 2 years) Apple won't be selling any 32 bit processors. Although llvm will probably be able to handle that without any ifdefs (are Java programs typically littered with ifdefs?). What you need to represent is mostly program logic and library calls (since Apple uses lots of shared libraries). There is no question Apple wants to ditch gnu and move to llvm. I can't see them doing fat binaries again, but then again nobody can predict the future.
Do you have any idea how long it would take to compile an entire operating system? Google Chrome already takes forever, so does WINE, and so does any major app. Nobody would have the patience to sit there and watch it compile, outside of the geek world that is. Universal Binaries are the answer. They take up slightly more space on the drive, but HDD space is not as restrictive as it used to be. I don't see the advantage.
I don't know why you think I was referring to the OS when I said program, that would be extremely silly. If it takes too long for a given program, they could always compile two versions. But producing a program from already optimized bitcode is faster than compiling from source.
 
The main advantage of 64-bit apps on the Intel platform (other than increased memory addressing), namely access to more registers, is simply not a problem with ARM processors. And since the new ARM processors have 40-bit addressing, which lets a machine address more than enough RAM for the foreseeable future of consumer and even pro machines, there is no real backwards move at all.
How long till ARM is able to keep up with or surpass Intel in clock for clock performance?
 
I'm curious why this doesn't make sense -- as I understand it, the primary advantage of 64-bit architectures outside of specific scientific applications is simply the ability to address more than 4GB of RAM. If you have a system to address that (e.g. a 40-bit address space), where's the problem?

Particularly if a 32-bit, many-core ARM architecture allows significant performance-per-watt advantages.

Once you get beyond the basic embedded CPU and start adding features to the CPU, ARM doesn't have any advantage - ARM's advantage is the ability to scale down, but scaling up it is no more efficient than what Intel has to offer.

The main advantage of 64-bit apps on the Intel platform (other than increased memory addressing), namely access to more registers, is simply not a problem with ARM processors. And since the new ARM processors have 40-bit addressing, which lets a machine address more than enough RAM for the foreseeable future of consumer and even pro machines, there is no real backwards move at all.

There is more to 64-bitness than access to more memory and registers; there is an Ars Technica article that discusses the Core 2 CPU in great detail. Once you read that, you'll understand why the comparison between ARM and what Intel has to offer is ridiculous.

The ARM processor in the iPhone / iPod Touch / iPad is 100 percent source-code compatible with 32-bit x86, unlike x86-64. Memory alignment is no problem in 95% of all cases. There is only a very small performance hit if the programmer insists on packed data structures, which covers 99.5% of all cases. In the remaining cases, problems are automatically fixed by the OS.

But that is the case for any well written application; if you've written your application cleanly then moving to x86-64 should be a relatively easy affair.
 
Unless you are a video editor like myself, then the specs really do matter ;)

One cool feature in Sandy Bridge is Quick Sync, to handle video encoding! If something like that were added to an ARM chip there would be no need to use the CPU cores! Not to mention you could just add a video card to take on the heavy processing for video and photo apps... The cloud might take care of the processing workload if everyone's pictures and videos are uploaded there already.
 
There is more to 64-bitness than access to more memory and registers; there is an Ars Technica article that discusses the Core 2 CPU in great detail. Once you read that, you'll understand why the comparison between ARM and what Intel has to offer is ridiculous.

I quickly read that article and the conclusions it draws pretty much just back up what I said. The advantage of 64 bit on Intel is the increased number of registers as well as the (obvious) increase in the amount of RAM that can be addressed. In fact the very last paragraph of the article sums it up perfectly:

Note that I attributed the CS performance increase to x86-64's larger number of registers, and not the increased register width. On applications that do not require the extended dynamic range afforded by larger integers (and this covers the vast majority of applications, including games), the only kind of performance increase that you can expect from a straight 64-bit port is whatever additional performance you get from having more memory available. As I said earlier, 64-bitness, by itself, doesn't really improve performance for anything but the rare 64-bit integer application. In the case of x86-64, it's the added registers and other changes that actually account for better performance on normal apps like games.

Should Apple move from 32-bit PPC to 64-bit PPC, Mac users should not expect the same kinds of ISA-related performance gains that x86 software sees when ported to x86-64. 64-bit PPC gives you larger integers and more memory, and that's about it. There are no extra registers, no cleaned up addressing scheme, etc., because the PPC ISA doesn't really need these kinds of revisions to bring it into the modern era.

The bit in bold is basically saying that the reason you get a speed increase moving to 64-bit on Intel (other than the registers and extra RAM) is that the Intel platform suffers from several well-known issues that other chips simply do not. Those other chips already enjoy the performance of these "fixes" even in their 32-bit incarnations, simply because they never suffered from the problems in the first place.

Since ARM processors have never suffered from the low number of general-purpose registers that Intel CPUs have, that is not a problem; and since they have 40-bit addressing, which can reach more than enough RAM for the time being, the RAM issue also goes out the window. Couple that with the fact that the ARM FPU has twice the number of double-precision floating-point registers of Intel CPUs, the same number of 128-bit SIMD registers, and much lower power consumption and heat generation (so you can fit more in one machine without frying it), and it is not beyond the realm of reason to assume that in a couple of years Apple could successfully switch its consumer line to ARM.
 
Wow, I always wondered what became of Gassée -- back in the day I was something of a BeOS enthusiast (and would-be user, except no useful apps were ever really written for it).
 
I quickly read that article and the conclusions it draws pretty much just back up what I said. The advantage of 64 bit on Intel is the increased number of registers as well as the (obvious) increase in the amount of RAM that can be addressed. In fact the very last paragraph of the article sums it up perfectly

I just read a bunch of Jon Stokes' articles -- very informative and clear, thank you for the link. As far as I understood it all, the issue indeed has nothing to do with 64-bitness -- but that Intel's current processors *are* much faster than ARM's processors at the same clock rate. This is due to all sorts of neat tricks that ultimately allow the processing of multiple instructions per clock cycle -- and it is precisely these neat tricks that take up more die space and consume more power.

Now the conclusions make sense -- in order to scale up to Intel-like processing power, ARM would lose much of its power consumption advantage. This may not necessarily be a problem, as the ultra-high-end chips would be in desktops anyway, but it then becomes a matter of playing on Intel's field, and as everyone says, Intel has a lot more fab prowess and experience in this area.

That said, I think Apple is up to something with iOS and ARM. There was an article in the May EE Times about how some 40% of the A5 die had an unclear purpose, and that Apple could be "holding back" some hardware-acceleration feature for the future. When you have the power to integrate your higher-level libraries (Core Graphics, Core Video, etc.) with transistor features on your chips, interesting opportunities for speed boosts may arise.
 
Now the conclusions make sense -- in order to scale up to Intel-like processing power, ARM would lose much of its power consumption advantage. This may not necessarily be a problem, as the ultra-high-end chips would be in desktops anyway, but it then becomes a matter of playing on Intel's field, and as everyone says, Intel has a lot more fab prowess and experience in this area.
True, but imagine a CPU that could use core(s) X for heavy duty lifting, that's no better or worse than an Intel CPU. Heck, maybe it IS an intel CPU.

Then, you go on battery power. If all I'm doing is Facebooking and writing papers, I don't need the power of an i7. Heck, I probably wouldn't need much more than the power of a Pentium 3... which an ARM CPU can hold its own against. So you use core(s) YY, which are designed specifically for low power. And all of a sudden, you can eke out 16 hours of battery life, or make the MBA even smaller!
 
but that Intel's current processors *are* much faster than ARM's processors at the same clock rate. This is due to all sorts of neat tricks that ultimately allow the processing of multiple instructions per clock cycle -- and it is precisely these neat tricks that take up more die space and consume more power.

No one is disputing that Intel processors are faster than ARM chips. It is just that ARM chips will be close enough to Intel performance in the next couple of years to be fine in Apple's consumer products. Something I do believe will happen eventually.

I'm not entirely convinced that the newer ARM processors will lose their power consumption advantage. One area where ARM have much more experience than Intel is designing power-efficient CPUs. I expect that experience to deliver some results in their new CPUs.

Plus they may decide to take a different route for their own "clever tricks" that bypasses the limitations of Intel's tricks. Obviously this is just speculation on my part, though.

Now the conclusions make sense -- in order to scale up to Intel-like processing power, ARM would lose much of its power consumption advantage. This may not necessarily be a problem, as the ultra-high-end chips would be in desktops anyway, but it then becomes a matter of playing on Intel's field, and as everyone says, Intel has a lot more fab prowess and experience in this area.

ARM have never manufactured chips themselves (other than test chips, I assume) and they won't start now, so the whole production thing is a non-issue. ARM design chips and license the designs to companies that do have the experience and production capability (Samsung, for instance, and now Apple with their A5).

That said, I think Apple is up to something with iOS and ARM. There was an article in the May EE Times about how some 40% of the A5 die had an unclear purpose, and that Apple could be "holding back" some hardware-acceleration feature for the future. When you have the power to integrate your higher-level libraries (Core Graphics, Core Video, etc.) with transistor features on your chips, interesting opportunities for speed boosts may arise.

Interesting. Do you have a link to that at all?
 
Then, you go on battery power. If all I'm doing is Facebooking and writing papers, I don't need the power of an i7. Heck, I probably wouldn't need much more than the power of a Pentium 3... which an ARM CPU can hold its own against. So you use core(s) YY, which are designed specifically for low power. And all of a sudden, you can eke out 16 hours of battery life, or make the MBA even smaller!

Probably not...
Intel CPUs are excellent at power consumption in idle states. They turn off a lot of otherwise-idle silicon, which pretty much amounts to the dual-CPU solution you're suggesting.
Of course, an ARM would probably use even less power, but switching between them would be a nightmare (requiring at least a reboot, if it could be done at all, which is a huge stretch). The Intel CPU can switch up in milliseconds.
And in these low-power situations, the screen suddenly becomes the bigger power issue, anyway.
 
ARM have never manufactured chips themselves (other than test chips, I assume) and they won't start now, so the whole production thing is a non-issue. ARM design chips and license the designs to companies that do have the experience and production capability (Samsung, for instance, and now Apple with their A5).

Right, though it's my understanding that Intel is somewhat ahead of everybody else in fab technology, and they generally don't fab outside designs with their most advanced plants.

Interesting. Do you have a link to that at all?

Yep, here you go: http://www.eetimes.com/electronics-news/4215094/A5--All-Apple--part-mystery
 
A Mac without Adobe Master Collection? I don't know about that.

I agree, with every fiber of my being.
That would be an epic fail.
 
Why must Microsoft move to ARM? I don't understand

They aren't. They're simply adding support for ARM processors.
 
I'm quite happy with my quad desktop. If they make at least a quad core desktop chip that has support for large amounts of ram, then we can talk. Till then, they should stay out of the desktop arena.
 