And it's been said many, many times here that that's not the entire picture. GPU performance, 32-bit vs. 64-bit Geekbench, and many other things are not factored in. Plus, these aren't real-world app or process benchmarks.

Indeed, it's the MR motto: never let the whole picture get in the way of a good rant. (But most of the ranters here are not actual pros who would bother to consider other factors as you suggest, and they're more than willing to jump to knee-jerk conclusions on circumstantial and incomplete evidence. School's out for the summer!)
 
With these performance bumps from gen to gen, I would say we are reaching the post-silicon era. I haven't seen anything substantial since the Ivy Bridge introduction.

By the way, nice performance for a Blender. :D
 
Anyone still complaining about the new MP needs to read this topic.
https://forums.macrumors.com/threads/1594354/
And watch the WWDC 2013 video called "Painting The Future" where Pixar uses the MP. It's just insane what can be done on the new MP. And it's free to register as a developer to watch these videos. I've done that.

https://developer.apple.com/wwdc/videos/index.php
The videos are here for you to watch. And if what I read is true (I'll find out soon, once the video has downloaded), then no one has to worry about what the MP can do in the real world. If Pixar loves it, then I'm sure it'll be just fine for what anyone here wants to do with it.

Are you saying that whatever Pixar showed couldn't be done significantly faster on workstations with twice the CPUs, twice the GPUs, and twice the RAM? Or is what Apple gives us just good enough?
 
By the way, nice performance for a Blender. :D

Tell that to Tom Dickson. He's the blender expert.

[Image: iPhone4S_will_it_blend.jpg]
 
Are you saying that whatever Pixar showed couldn't be done significantly faster on workstations with twice the CPUs, twice the GPUs, and twice the RAM? Or is what Apple gives us just good enough?
I'm saying raw specs don't tell the entire story. What you do with the specs and how you optimise for them in apps and the OS matters too. Hence we're all waiting for some real-world benchmarks to appear, or at the very least a 64-bit Geekbench (to compare the CPU only).

I'm not going to bash a machine that hasn't yet been released to the general public, like so many others here are. I'll wait and let it tell me the story once it's released.
 
Just because it's a "desktop" computer doesn't mean it has to take up a significant fraction thereof...

...And when you do need to move it, it's easy to and you don't have to clear off several square feet of new desk space.

Call me crazy, but I'd rather not have a computer on my desk at all. Space is already at a premium dealing with 3 monitors, audio monitors, a keyboard, a mouse, and a Wacom tablet. Why would I want the hardware taking up even more space, considering how infrequently you need access to it?

The term "desktop" is as antiquated as "laptop."


And watch the WWDC 2013 video called "Painting The Future" where Pixar uses the MP. It's just insane what can be done on the new MP. And it's free to register as a developer to watch these videos. I've done that.

I wouldn't get too worked up over the software demo. Yes, it was impressive. But more than anything it was a demo of the software capabilities rather than the hardware. There's a reason they said it was the best "out of the box" performance they've seen. We won't truly see the power of this machine until it gets out in the real world. Until then, it's all just meaningless speculation. But I suppose that's part of the fun with these forums.

I will say, though, the Mari demo definitely has me contemplating their 40% off sale.
 
disappointed

The graphics choice is nuts, lol. Wait a minute, there is none. That they don't at least offer Titan or Quadro options for this "professional" workstation is absurd.
 
32-bit vs. 64-bit Geekbench

Let's just ignore that it's unreleased, beta hardware, etc., and stick to the fact that this is the 32-bit version of Geekbench. I did a simple test on my Windows desktop (AMD Phenom II X6 1090T + 8 GB RAM at 1333 MHz) and the difference was about 33%: from a score of 8,669 to 11,531. If you apply the same ratio (I know my analogy might not have a solid basis), you might end up with a score of around 31,791 in the 64-bit version. Of course, I don't really know whether the results shown here for the current Mac Pros were based on the 64-bit or the 32-bit version.
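For illustration, here's that extrapolation spelled out as a quick sketch (the 32-bit base score below is just the value implied by the 31,791 projection, not a figure from this thread, and a scaling ratio from one AMD system may not carry over to a Xeon):

```python
# Rough sketch of the 32-bit -> 64-bit Geekbench extrapolation above.
phenom_32 = 8669
phenom_64 = 11531
uplift = phenom_64 / phenom_32 - 1        # ~0.33, i.e. ~33%

# Hypothetical 32-bit score for the new Mac Pro (placeholder; this is
# just the value implied by the 31,791 projection quoted above).
mac_pro_32 = 23_900
mac_pro_64_est = mac_pro_32 * (1 + uplift)

print(f"~{uplift:.0%} uplift -> estimated 64-bit score: {mac_pro_64_est:,.0f}")
# ~33% uplift -> estimated 64-bit score: 31,790
```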
 
Indeed, it's the MR motto: never let the whole picture get in the way of a good rant. (But most of the ranters here are not actual pros who would bother to consider other factors as you suggest, and they're more than willing to jump to knee-jerk conclusions on circumstantial and incomplete evidence. School's out for the summer!)

Eh, "pros" can certainly jump to rash conclusions with the best of them. It's just they'll eventually come back down to reason once it's actually time to make a purchase decision. And I put that word in quotes because it has kind of lost all meaning and value around these parts by now.
 
Oh noes! They taked away our SATA ports and PCIe slots. We is forever doomed!:rolleyes:

The only thing in which this new Pro isn't superior to the old one is swappable graphics cards, which I see as a non-issue for 90+% of the pros buying this computer, since the cards it has are going to be plenty for the next four years. And if you need a little boost, Thunderbolt will give it to you. It seems nearly everybody here grossly underestimates the power and versatility of Thunderbolt. As new tech always is, it will be expensive at first, then the price will come down over time. External upgrades will be the way to do most upgrades by the end of the next half decade. And as everyone else has said, 32-bit Geekbench results mean nothing when they don't take GPU power into account and come from beta hardware running beta software.

Whether Apple is ahead of the curve or is making this curve themselves, it is the way of the future.
 
My scores have improved since last time I posted them.

The new Mac Pro reminds me of the old SGI workstations, especially this one:

http://en.wikipedia.org/wiki/SGI_O2

However, I will say this: I like where the Mac Pro is headed, but I think a two-processor Mac should still be offered by Apple, as GPU software is just not here yet. To be clear, I believe Apple should not leave pros stranded for multi-CPU computing power again and should keep such a machine in the catalog; whether they even bother to offer one, we'll have to see.

I will probably not jump in on the first generation, just to wait for Apple to iron out all the kinks (remember how the liquid-cooled Power Mac G5 used to leak? At least for some people, after heavy usage...).

Now, my biggest gripe is that the old Mac Pros have just two 6-pin PCIe connectors. If the new Mac Pro has two high-end workstation-class cards, I would think it's safe to say that each would somehow get one 6-pin and one 8-pin PCIe connector's worth of power. This is nowhere mentioned so far, but I'll keep my fingers crossed. Could this mean a Titan could be fitted in there without the regular heat sink? I'm excited for someone to tinker with one, as I'd like to see what happens! :D

1. The 2009, 2010 and 2012 Mac Pros have one big fan built into the PSU, one big fan dedicated to cooling the PCIe cards, one big fan to pull air over the CPU/memory board, one big fan to pull air out of the bottom rear of the case, and a small fan to cool each CPU; so a dual-CPU machine has six fans and a single-CPU machine has five. Are we to believe that Apple has been wasting money all these years by using five or six fans? Definitely not. While it is true that innovation requires thinking differently, it does not require going completely, foolishly insane. Thinking differently in many aspects may still mean thinking like one did in the past in some particular aspect.
2. Putting two CPUs in that tube would just exacerbate the problem. Sometimes one can innovate to the point of having to think differently in every way, and thinking differently may mean going back to thinking like one did once in the past, though not so far back as the Cube. One reason why, in the past, Apple didn't give us Mac Pro systems with the absolute fastest CPUs was TDP related: they wanted to give us fast systems that were quiet and cool, so that throttling didn't occur often. My suggestion to Apple would be not to abandon the notion that less is often more; they should use an Ivy Bridge Xeon with a TDP of 115 W or less, so that they can run one or more of the following CPUs full out:

Xeon E5-2640 V2 : 2.0 GHz, 20 MB L3, TDP 95 Watts (8-Core) (8 x 2 = 16 GHz)
Xeon E5-2660 V2 : 2.2 GHz, 25 MB L3, TDP 95 Watts (10-Core) (10 x 2.2 = 22 GHz)
Xeon E5-2695 V2 : 2.4 GHz, 30 MB L3, TDP 115 Watts (12-Core) (12 x 2.4 = 28.8 GHz)

because of TDP (only one system fan) and the steady step up in speed; the quick math is spelled out below.
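For what it's worth, here's that cores × base clock arithmetic as a quick sketch (a crude aggregate that ignores Turbo Boost and per-core IPC; the GHz-per-watt figure is my own addition):

```python
# Sanity-checking the aggregate-clock figures quoted in the list above.
xeons = [
    # (model,       cores, base GHz, TDP W)
    ("E5-2640 v2",  8,     2.0,      95),
    ("E5-2660 v2",  10,    2.2,      95),
    ("E5-2695 v2",  12,    2.4,      115),
]

for model, cores, ghz, tdp in xeons:
    aggregate = cores * ghz                  # e.g. 12 x 2.4 = 28.8 GHz
    print(f"{model}: {aggregate:4.1f} GHz aggregate, {aggregate / tdp:.2f} GHz/W")
```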

3. In building my 2010 Hackintosh (see the GB2 score of 40,100 in my signature, below), the issue I faced was thermals/throttling, i.e., how to keep those two Xeon 5680s cool but run them fast. At first I tried overclocking them greatly (but that just made the problem worse), then I tried overclocking them less and less, until I arrived at the idea of underclocking them to run at under 2.5 GHz at idle while magnifying their Turbo Boost potential, so that each 6-core CPU had turbo bins of 13, 13, 13, 13, 14 and 14, giving me the performance I need for rendering. I then started to achieve Cinebench 11.5 scores in excess of 24.5 and, since I use Cinema 4D among others, lightning-fast renders. But this one-fan nonsense has significant limits, especially when there's no external brick because the PSU is also in the cylinder. As Inspector Harry Callahan would say, "One's got to know his system design's limitations," and I'd add: "and work around them."
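To put rough numbers on that underclock-plus-turbo-bins idea (a sketch built on assumptions: the Westmere-EP 133 MHz base clock, an idle multiplier of 18, and bins ordered from all cores active down to one; the post doesn't state any of these):

```python
# Effective turbo speeds for the underclocked Xeon 5680s described above.
BCLK_GHZ = 0.133          # Westmere-EP base clock (assumed)
IDLE_MULT = 18            # assumed idle multiplier: 18 x 0.133 ~= 2.4 GHz

# Turbo bins as quoted, assumed ordered from 6 active cores down to 1.
turbo_bins = [13, 13, 13, 13, 14, 14]

for active, bin_ in zip(range(6, 0, -1), turbo_bins):
    ghz = (IDLE_MULT + bin_) * BCLK_GHZ
    print(f"{active} active core(s): ~{ghz:.2f} GHz")
# 3-6 active cores land at ~4.12 GHz, 1-2 active at ~4.26 GHz
```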

4. Titan cards would have to be modified to be usable. I like this better (my Tyan-based WolfPackAlphaCanisLupus0, pictured in the attachment below; I'm thinking about building another one):
 

[Attachment: AlphaCanisLupusIntFulSide2.jpg]
Moore's law seems to be going backwards in terms of single-core raw speed. Twelve cores are nice, but I'd rather have six at much greater speed for most things, except maybe video and rendering work.

Moore's law refers to transistor density: the number of transistors you can fit on a die doubles roughly every 18 months to two years. The reason pretty much everyone has stopped clocking their CPUs higher is that the heat would destroy them.
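As a toy illustration of that doubling (an 18-month period and a made-up starting count, purely for arithmetic's sake):

```python
# Moore's-law doubling: transistor count grows as 2^(months / period).
def transistors(start: float, years: float, doubling_months: float = 18) -> float:
    return start * 2 ** (years * 12 / doubling_months)

# 1.4 billion is a made-up, 2013-ish starting figure; 6 years = 4 doublings.
print(f"{transistors(1.4e9, 6):.2e} transistors")   # 2.24e+10, a 16x increase
```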
 
Has there been any speculation about what the max clock speed may be for the next generation of 12-core chips? I would hope that 2.7 won't be the highest, and if it is, that means they're comparing the highest of the previous generation to one that isn't the highest of the next generation.

I assume Apple's 2x floating-point comparison is against the current 12-core 2.4 GHz model; it looks like the top speed of the next one may come close to double that. Or maybe they're comparing the current base model to their next planned base model. Moving from a quad to something better in the base is a good thing, and depending on whether the price goes up or down, it could make that switch either less appealing or more.

Also keep in mind that the top current 12-core MP costs $6,199. If this new 2.7 ships and "only" improves performance by 2,000 points, but comes in at a lower price point than that $6,199, I'd say that's reason enough to make lots of users happy. Plenty of people are saying they'll just get the previous generation, but prices on those are unlikely to drop that much, so it's hard to imagine many people paying as much or more for a slower machine (except for those few who truly need the PCIe slots, and that's probably not that many).

Of course everyone would love to see a dual 8-core or dual 12-core Mac, but if they cost $8,000 or $10,000, would anyone here actually buy one, or is that just about bragging rights?

Apple also seems to be betting big on the dual GPUs and OpenCL. If they optimize the heck out of all their apps to offload lots of processing to the GPU (and I'm looking at you, Logic 10), there will be a performance boost far beyond what a benchmark like this shows. If apps really start using OpenCL, maybe it's time for some updated benchmarking apps that use it as well (probably as a secondary number).
 
So we waited 1500 days for an 8% boost. :(

Apple must have put a lot of money into researching what actual users of the Mac Pro really want, and it looks like a tiny form factor resembling a dustbin came out above performance needs.

Strange world innit.
 
Is it possible that this is bogus? A Hackintosh with its info changed to look legit? Maybe even someone who got hold of a new Xeon and built a system?

Just wondering.

Or perhaps they'll keep selling the old Mac Pro and just release (perhaps slipstream) an updated CPU into the machine, which is what this is? If they decided to keep selling the old model, wouldn't it need some changes to sell in Europe, which might result in a "new" model?

Regardless, it seems some people (as always) have concluded a lot from nothing and condemned Apple's unreleased new product as a failure without waiting for reality to occur. Wouldn't it be prudent to wait and gather more information before wasting so much energy on getting upset and angry over absolutely nothing? Isn't that what Chicken Little did? Are adults really no smarter than the main character in a children's fable, and even less so because the fable taught them nothing whatsoever?!
 
since the cards it has are going to be plenty for the next four years.

That's really going to depend on the software developers. As of now, there are programs that perform much better with CUDA cards.

And if you need a little boost, Thunderbolt will give it to you. It seems nearly everybody here grossly underestimates the power and versatility of Thunderbolt.

I think there are just as many who overestimate it as well.
 
There are Mac Pros on Geekbench that also score 40,000+... doesn't mean they're valid.

I think the results are fine. The speed is about 2x my 8-core Nehalem Mac Pro, which works for me. And Geekbench doesn't even measure the speed of the GPUs, which is the whole point of the new Mac Pro.

Apple is replacing the CPU compute engine with the GPU compute engine. Geekbench really needs to look at incorporating GPU compute into their benchmarks. Some of the benchmarks should be perfect for GPU compute, such as Mandelbrot, dot product, Blur, and so on.

In fact, I don't even know why they're running those benchmarks on the CPU side, since most people perform those functions on the GPU now.

Geekbench seriously needs to be updated to move those functions into the GPU.
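To make the GPU-compute idea concrete, here's a minimal PyOpenCL sketch of one of those workloads (dot product). This is purely illustrative, not how Geekbench is built; the kernel, array sizes, and host-side final reduction are all my choices:

```python
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

# Elementwise multiply on the GPU; a real benchmark would also do the
# final reduction on-device and time the host<->device transfers.
kernel_src = """
__kernel void multiply(__global const float *a,
                       __global const float *b,
                       __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] * b[i];
}
"""
prog = cl.Program(ctx, kernel_src).build()

n = 1 << 20
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

prog.multiply(queue, (n,), None, a_buf, b_buf, out_buf)

out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)
print("dot product:", out.sum())  # final reduction done on the host here
```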

I agree that Geekbench should take GPU performance into account, but many pro users still rely on CPU power. That's still where the majority of 3D rendering is done, except for a select few renderers. With Dell and HP offering 16-core, and soon 24-core, models, Apple really needs dual-processor models to compete in that market.

The good thing is that PCs seem to be inherently incompetent when it comes to dealing with the standard QuickTime format (I know from experience), so Apple will probably continue to have the video editing market. PCs don't have the ProRes codecs, and have a very difficult time encoding to QuickTime H.264 without failing.
 
Has there been any speculation about what the max clock speed may be for the next generation of 12-core chips? I would hope that 2.7 won't be the highest, and if it is, that means they're comparing the highest of the previous generation to one that isn't the highest of the next generation.

Not only speculation, but verification: 2.7 GHz will be the highest 12-core chip.

----------

Apple also seems to be betting big on the dual GPUs and OpenCL. If they optimize the heck out of all their apps to offload lots of processing to the GPU (and I'm looking at you, Logic 10), there will be a performance boost far beyond what a benchmark like this shows. If apps really start using OpenCL, maybe it's time for some updated benchmarking apps that use it as well (probably as a secondary number).

And that's where all of this really matters. With locked down hardware, a lot is going to be dependent on the developers optimizing their software for it.
 
The 2008 with one CPU was $2,299, a saving of $500, but the 2.8 GHz E5462 CPUs they came with were $800 and the heatsinks were $100+. They didn't come down in price for years, as the vast majority were inside Mac Pros.

Ah yes - I got an Apple Developer's discount on mine (remember those??), so I paid under $2K.

I actually replaced my single 2.8 with two 3.0 CPUs less than 2 years later, and only paid ~$400 total for both. They were pulls, but worked perfectly.

I still think that year was the sweet spot for Mac Pro value. And unlike the 2006, it has a 64-bit EFI. So hopefully that means it has a few more OS upgrades left in it.
 
That's how you overstress your equipment, leading to premature failure.

I've been clock-tweaking my systems since 1985 and currently have clock-tweaked Tandys, Apples, Dells, Ataris, Commodores, an HP, and a DEC Alpha that all still run like a charm. I've got more than 5 clock-tweaked Commodores (at least 20 years old), more than 5 clock-tweaked Ataris (at least 20 years old), more than 15 clock-tweaked Dells (about 10 years old), a clock-tweaked HP (eight years old), and more than 30 clock-tweaked Apples/Macs (some almost 30 years old), plus others. I've never had a computer fail. As Phil might say, "Can't clock tweak my ass."

BTW: those Apples/Macs include two clock-tweaked Macintosh PowerBook G3 laptops that haven't failed. If one's willing to always use that grey matter and always self-educate, then predictions of premature failure don't become reality. Fear and ignorance are choices I chose not to make.
 