Go Back   MacRumors Forums > Apple Hardware > Desktops > Mac Pro

Old Jun 27, 2013, 03:23 PM   #26
cmanderson
macrumors regular
 
Join Date: May 2013
Quote:
Originally Posted by deconstruct60 View Post

So is the point that robust NUMA can't be added to the Mach kernel, or that Apple just isn't capable, or what?
It should be obvious to most, by now, that Apple has the resources at their disposal to solve any technical challenge (related to their product offerings). I'd hope the author had that in mind when they wrote what they did. Whether or not they (Apple product managers) see value in doing so is another story.
Old Jun 27, 2013, 03:27 PM   #27
goMac
Thread Starter
macrumors 601
 
Join Date: Apr 2004
Quote:
Originally Posted by cmanderson View Post
It should be obvious to most, by now, that Apple has the resources at their disposal to solve any technical challenge (related to their product offerings). I'd hope the author had that in mind when they wrote what they did. Whether or not they (Apple product managers) see value in doing so is another story.
They do, and I don't think the author was ignoring that. But is it worth possibly destabilizing the kernel for all Macs for a very small audience?

Kernel changes aren't exactly fun. They're one of the few places on OS X where if you mess up, a crash brings down the whole machine.
Old Jun 27, 2013, 03:35 PM   #28
cmanderson
macrumors regular
 
Join Date: May 2013
Quote:
Originally Posted by goMac View Post
They do, and I don't think the author was ignoring that. But is it worth possibly destabilizing the kernel for all Macs for a very small audience?

Kernel changes aren't exactly fun. They're one of the few places on OS X where if you mess up, a crash brings down the whole machine.
I agree.

That's why they would not tread lightly when it comes to such a change. But they have known about the issue for years; otherwise there would have been proper support long ago, rather than it being swept under the rug, so to speak, as some have pointed out.

They would have many options. One would be to fork the kernel for the workstation market, which of course brings its own problems. Another would be to start a complete rewrite of the kernel and spend years building back in compatibility and testing, testing, testing. That's where the "is it worth it?" conversations would be taking place before such a project could even get off the ground.
Old Jun 27, 2013, 05:13 PM   #29
deconstruct60
macrumors 603
 
Join Date: Mar 2009
Quote:
Originally Posted by cmanderson View Post
It should be obvious to most, by now, that Apple has the resources at their disposal to solve any technical challenge (related to their product offerings).
That is doubtful when 10.9's release is sliding backwards because they are changing icons and GUI elements.

The other issue is that NUMA isn't all that different from the cache differences that show up as cache sizes get bigger. There are still processor affinity issues to work on even if you don't have dual CPU packages. OS X's highly dubious approach of just evenly doling out process threads suggests they may not have a bleeding-edge performance kernel anymore. Mach is a scary monster they largely keep locked in the closet. They'll tune for power because that work carries over to iOS as well, but getting whipped by Windows on USB 3.0 driver performance, and other stuff like that, smacks as much of can't as don't care.

It wasn't just NUMA that the author mentioned.

" ... I wouldn't be surprised if they've previously tried and got in a terrible mess. The task/thread management is scattered across the Mach and BSD subsystems, and the virtual memory system handles some conditions (such as low memory) terribly, and seems to scale badly to large amounts of RAM and 64 bits. ... "

OS X was stuck at 96GB for how long? Could Apple eventually solve any technical problem? After enough money and enough years, probably so. Can they solve a broad array of them right now? Not so clear.



Quote:
I'd hope the author had that in mind when they wrote what they did. Whether or not they (Apple product managers) see value in doing so is another story.
When the product managers don't see much value in solving hard-core plumbing problems, the folks who can solve hard-core plumbing problems tend to leave. It isn't the money or the resources I question. It is Apple's HR management approach that draws questions.
Old Jun 27, 2013, 06:59 PM   #30
deconstruct60
macrumors 603
 
Join Date: Mar 2009
Quote:
Originally Posted by goMac View Post
Pushing OpenCL is also very smart in since OpenCL runs on many many devices (GPUs, CPUs, Xeon Phi.) It gives them a lot of flexibility in future designs.
Not just future designs, current ones too. OS X 10.9 has Intel OpenCL drivers for HD 4000 and HD 5000 (and presumably Iris / Iris Pro) graphics. That means the number of "dual GPU" machines is already up in the millions and will grow by millions more in the next 8 months, regardless of what happens with the Mac Pro.

Those machines may not have 2+ TFLOPs of latent performance, but you could probably roughly double those systems' CPU FLOPs just by engaging the GPUs more.

So it is not some narrow "Pro" niche of the upper 1%.

That's what likely gives this much greater impact. Software that taps into this configuration assumption can actually be useful on a broad spectrum of Macs, and therefore developer adoption will happen more quickly. Otherwise it will be... "well, the lunatic fringe ran out and bought the new Mac Pro, but what is that... 0.01% of the deployed Mac market? Put that feature on the 'maybe do in the future' list."
Old Jun 27, 2013, 09:41 PM   #31
ekwipt
macrumors 6502
 
Join Date: Jan 2008
I thought the reason for the single processor is that Intel hasn't released a dual-chip system that can use Thunderbolt yet. I'm pretty sure I've read that somewhere else.
Old Jun 27, 2013, 10:18 PM   #32
deconstruct60
macrumors 603
 
Join Date: Mar 2009
Quote:
Originally Posted by ekwipt View Post
I thought the reason for the single processor is that Intel hasn't released a dual-chip system that can use Thunderbolt yet. I'm pretty sure I've read that somewhere else.
That doesn't make a lot of sense, because Apple is likely using both E5 1600 v2s and E5 2600 v2s in this new Mac Pro. The 12-core chip is dual-socket capable. The 1600 uses the same core implementation infrastructure with primarily just the two QPI links switched off and the clocks run at different speeds.

I've heard vague whispers about "hidden microcode" and/or a firmware incantation that Intel has to flip on in the CPU for Thunderbolt to work (i.e., that only a fixed subset of Intel CPUs work with Thunderbolt). I've never seen any concrete info to back that up. I've also seen claims that TB would only work with CPUs that have Intel iGPUs (obviously that isn't true).


AMD doesn't like Thunderbolt

http://www.xbitlabs.com/news/other/d...underbolt.html

If there were actually an incantation that proactively blocked every CPU except a select few Intel ones from working with Thunderbolt, I'd expect AMD to be yelping about that fact to anyone who would listen.
They don't like it because Intel has 100% control of the production and supply of Thunderbolt controllers.
Thunderbolt just means more money in Intel's pocket. Neither AMD nor Nvidia is going to be a big fan of that. Thunderbolt is only going to further solidify Intel's hold on classic PC graphics.
Old Jun 27, 2013, 10:28 PM   #33
foidulus
macrumors 6502a
 
Join Date: Jan 2007
Quote:
Originally Posted by deconstruct60 View Post
Not necessarily, if they are equal microarchitecture implementations.

Two 6-cores, each with a quad-channel memory interface (8 channels total), running at the same speed as one 12-core capped at a single quad-channel interface: the 12-core has half the memory bandwidth.

You can wave hands about some of that memory being remote (QPI isn't that slow in latency or bandwidth). It is about the same argument as 3 DIMMs vs. 4 DIMMs in the current Mac Pro: if the problem solution can leverage more memory, the additional DIMM is worth the overhead of the multiple-rank connection.

The problem with the single-package 12-core is that it has a 3:1 ratio of cores to memory channels. A dual 6-core would have a 3:2 ratio.

Do 12 streams of AES realtime crypto from RAM (not localized into a single DIMM) and the dual 6s would turn in better times.



The question is how large that group is. With the single-package 12-core, the upcoming model can peel off those whose workload is not RAM-constrained (past what is reasonably affordable with 4 DIMM slots) and/or isn't memory-I/O bound. That is likely a sizable subgroup.
The other thing the dual 6-cores have going for them is more cache (if we are only looking at Intel's offerings, then we are talking about 2x the amount of L3 cache; L1 and L2 are the same in both the 12-core single-CPU and the dual 6-core configs).
Old Jun 27, 2013, 11:33 PM   #34
deconstruct60
macrumors 603
 
Join Date: Mar 2009
Quote:
Originally Posted by foidulus View Post
The other thing the dual 6-cores have going for them is more cache (if we are only looking at Intel's offerings, then we are talking about 2x the amount of L3 cache; L1 and L2 are the same in both the 12-core single-CPU and the dual 6-core configs).
Errr, I don't think so. The E5 building-block structure uses a layer-cake design (running horizontally across the die) like this:


[ core ] [ring bus ] [ L3 cache blocks ] [ring bus ] [ core]


(The non-core logic (memory controller, PCIe controller, QPI, etc.) sits at the top and bottom of the layer cake, along with the ring bus running horizontally across the top/bottom to close off the ring around the interior of the die.)
The [core] here includes the core's logic and its L1/L2 caches.


To get to 6 cores you stack up three of these layers; to get to 12, six of them. There are two 2.5 MB blocks of L3 per layer, i.e. 5 MB per layer. So either two 3-layer stacks (2 × (3 × 5 MB) = 30 MB) or one 6-layer stack (6 × 5 MB = 30 MB).

Same stuff. More cores directly leads to more L3 cache. (There are a couple of quirky products in Intel's lineup where they kneecap the L3 a bit for product differentiation: for example, the E5 1650 has 6 cores with 12 MB of L3 while the E5 1660 has 6 cores with 15 MB. But generally, more cores means more L3.)

As the process technology gets better, it becomes more practical to just keep stacking more layers on the cake. You just have to worry about saturating the ring bus and getting I/O on/off the die fast enough to keep up with the increased demand from the cores. But other than that, Intel can just continue the "core count" war with AMD.