As a result, speculation suggests that Apple could be preparing to make a significant announcement that will prominently feature the "Macroscalar" term in a similar way to how the company uses "Retina" to describe its high-resolution iPhone and iPod touch displays.

Is LLVM a marketable term? OpenCL? Even Grand Central?
No, they are not. But the article suggested that it will prominently feature the term.

Why should they?
Again, read the article.
 
This idea that Apple doesn't design anything needs to die!

Apple has a very long history of designing hardware.

This doesn't sound like a new idea at all, nor does it sound very important, since modern chips already do branch prediction to keep the pipelines full and do it well.
One would have to dive into the details of the patent to really know how unique this approach is. However, if it is patentable, then it must have some unique characteristics.
To me it just sounds like a case of a strange trademark and will probably not amount to anything. Despite the "A4" and the "A5," Apple isn't a chip design company -- like many companies, they simply license preexisting designs to create custom systems-on-a-chip (SoCs).
This is pure nonsense; Apple has been designing chips and hardware for years, including the chipsets used on a number of its motherboards. I'm not sure where this nonsense comes from.
Consider, for example, the trace cache, which does a similar thing but based on dynamic execution of the program, and so requires no compiler support.

I have no idea if this will ever make it into shipping Apple hardware. Rolling your own CPU hardware is a huge task. However, they have been researching the inner workings of CPU hardware for years now. That is a lot of money to spend if you have no intention of ever using the R&D in the first place. So I have to think Apple has plans along the lines of using this hardware tech.

However, the where and how are unknown. They could easily use this tech in the GPU, or they could be heading towards a heterogeneous processor. There is an assumption in this thread that this will be an extension to the ARM core, but that is not a given. Beyond that, Apple could be working with either Imagination or ARM to roll this IP right into standard cores; Apple doesn't need to take the conventional path here.
 
Nonsense!

This is correct if you're using Java, but if you're coding in C, then you'd better care about the chip your code is running on. Statements like this just make you seem like a sucky programmer.

Programmers that write code nobody else can maintain Suck!
Programmers that write code they can't maintain Suck Hard!
Programmers that write code that doesn't run on new architectures Suck the Hardest.
 
This sounds more like compile-time optimization.

What if, say, the compiler could take a loop apart and cause each iteration to occur in parallel? As an example, take generating a list view. Each cell results in a callback to get the contents of the cell to display. What if the majority of those callbacks and cell generations could happen simultaneously?

Or, perhaps, generating image icons for a thumbnails view?

Some of those things could be done in parallel now, if the developer broke the code up into GCD blocks. But that requires the developer to write the code that way. What if compiler and processor technology advanced to the point where the developer just codes the basic loop, and the system does the GCD breakdown automatically, on a "macro" scale?

Could be a huge advance in taking advantage of multiple processes and cores.
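
To make the "GCD blocks by hand" part concrete, here's a rough Swift sketch of the manual version: each loop iteration becomes a block on a concurrent queue, with a dispatch group to wait for them all. The generateThumbnail helper is made up purely for illustration, and this is just one way a developer might break the loop up today, not anything Apple has described.

```swift
import Dispatch
import Foundation

// Hypothetical per-item work, standing in for "render one thumbnail".
func generateThumbnail(for index: Int) -> String {
    return "thumbnail-\(index)"
}

// The manual breakdown: the developer splits the loop into blocks, submits
// each one to a concurrent queue, and waits on a dispatch group.
func generateThumbnailsManually(count: Int) -> [String] {
    var results = [String?](repeating: nil, count: count)
    let queue = DispatchQueue(label: "thumbnails", attributes: .concurrent)
    let group = DispatchGroup()
    let lock = NSLock()            // serialize writes into the shared array

    for i in 0..<count {
        queue.async(group: group) {
            let thumb = generateThumbnail(for: i)
            lock.lock()
            results[i] = thumb
            lock.unlock()
        }
    }
    group.wait()                   // block until every iteration has finished
    return results.compactMap { $0 }
}

print(generateThumbnailsManually(count: 8))
```

The point of the post is that all of this queue, group, and locking ceremony is exactly what the developer would rather not have to write for a plain loop.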
 
Is it just me or is Macroscalar not really a marketable term?

I know next to nothing about all the technical stuff you've been discussing, so it was an interesting (and partially comprehensible) thread to read through. But I don't see Apple using the term Macroscalar as they have used Retina.

Just some food for thought here: Apple's mission has always been about bringing the "power to the masses", like they did with GarageBand, iMovie and iBooks Author. Will they not someday move towards a closed environment where people create their software with iProgram and publish it exclusively to the Mac App Store? I wouldn't want this to happen, but is there even the slightest chance Apple would ever want this?

Not really. It's likely a feature of their new CPUs that they think is really revolutionary and going to distance them from the competition in terms of battery life and/or performance. Rather than say, "Hey, we've got this new processor with some really cool IP that makes it a lot faster!" they decided to attach a marketing name to the architecture technique because they must expect it to be repeated in some capacity.

I don't think it affords them anything in making the IP any more protected than what a patent would already do. They trademarked it because they want consumers (or maybe just developers, and I'm overestimating it, but that seems like a lot of effort just to give devs a term to use) to use it and know it. That's why I have a hard time seeing it as nothing more than a technique to execute loops faster. Whatever it is, it's the baby of PA Semi, Intrinsity, and all of Apple's creative forces behind their custom silicon.

If the above is true, I can see them wanting to release an A9-based design rather than A15, even though the A15 should be within their reach for the A6. The A6 would then be a quad-core version of the A5 with this new special-sauce silicon baked in. They may even keep it on a 40nm process to retire the risk of anything new but this architecture, given it may be a long time in the making.
 
What if, say, the compiler could take a loop apart and cause each iteration to occur in parallel? As an example, take generating a list view. Each cell results in a callback to get the contents of the cell to display. What if the majority of those callbacks and cell generations could happen simultaneously?

Or, perhaps, generating image icons for a thumbnails view?

Some of those things could be done in parallel now, if the developer broke the code up into GCD blocks. But that requires the developer to write the code that way. What if compiler and processor technology advanced to the point where the developer just codes the basic loop, and the system does the GCD breakdown automatically, on a "macro" scale?

Could be a huge advance in taking advantage of multiple processes and cores.

GCD can already do this.
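
For reference, this presumably means dispatch_apply, exposed in Swift as DispatchQueue.concurrentPerform: you hand GCD the plain loop body and it fans the iterations out across the available cores, returning once they have all finished. A minimal sketch, with contentForCell as a made-up stand-in for the cell-content callback:

```swift
import Dispatch
import Foundation

// Hypothetical stand-in for the callback that produces one cell's contents.
func contentForCell(at index: Int) -> String {
    return "cell \(index)"
}

let cellCount = 100
var cells = [String](repeating: "", count: cellCount)
let lock = NSLock()    // serialize writes into the shared array

// concurrentPerform (dispatch_apply) runs the loop body in parallel across
// the available cores and only returns once every iteration has completed.
DispatchQueue.concurrentPerform(iterations: cellCount) { i in
    let content = contentForCell(at: i)
    lock.lock()
    cells[i] = content
    lock.unlock()
}

print(cells.prefix(5))
```

The developer still has to opt in and make sure the iterations are independent, which is the gap the Macroscalar speculation above is about.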

By the way, I had serious trouble posting this on my iPad for some reason, first time too. May want to look into it. None of my text got sent with the posting.
 
 