New Mac Pro thread (merged)

I like this new mac pro, won't be getting the 1,1 version but in 7 years when I have
-maxed out my ram to 32gb
-overclocked my processor a bit more
-updated the video card twice
-filled all 8 hard drive bays

and it still isn't doing the job as well as I'd want it to, I will go Mac again, because I tend to flip-flop.
 
I gotta believe there's a lot easier ways to make a computer cheaper than using virtually all non-standard parts to stuff inside a tube.

who says they are making the system less expensive ???

There are little to no indicators that they are out to decrease the price. They are going from one desktop-market GPU (with custom boot ROMs) to workstation-class GPUs. That is a substantive increase in cost in the Windows world. I highly doubt Apple is going to deliver that same higher-class card in completely customized form at lower cost than the Windows version, which is standardized and sells in higher numbers.

The maxed-out 12-core E5-2600 series chip that can't be used as a dual... probably a huge price tag ($1,700+ just for the processor, before Apple's 30% mark-up on it).

They can also goose the SSD minimum size to push the system price higher too.
A brand-new PCI-Express-technology SSD card... again, volume pricing? Not.

Empty drive bays that got chopped off didn't really add much in material costs.

It probably will start close to the $2,000 border with the iMac. But is Apple going to price this to fratricide the iMac .... don't hold your breath.
 
After going through the slideshow I have one more thought: who put the power button in the BACK and in the middle of it as well??

Connections are bad enough although you can *grimace* use extension cables. With the power button there it pretty much HAS to go on the desk ... along with everything else that used to go inside. Even my windows towers at work have more sense than to do something that boneheaded.

Oh, I get it. Radical new thinking. Monitors go on the floor now.

Don't forget, it swivels. Really, you just spin it around.
 
Looks no different than my Mac mini mockup from a year or so ago :D

[attached image: the Mac mini mockup]
 
Anyone else doubt that the Mac Pro was designed by Jony Ive? It doesn't look like anything that he's ever designed, and we didn't see him included in the Sneak Peek video for it today. I guess he would have to approve/sign off on it, since he is the Senior VP of Industrial Design. I wonder what it's made of, too...
 
Just cuz it's different doesn't mean it's better in any way. The weak Thunderbolt 2 bandwidth vs PCI-Express 3.0 tells it all. I'd rather they just made the Mac Pro smaller, like a custom ITX/DTX-like form factor with dual CPU sockets or something, not this proprietary trash bin with zero expandability.
 
Anyone else doubt that the Mac Pro was designed by Jony Ive? It doesn't look like anything that he's ever designed, and we didn't see him included in the Sneak Peek video for it today. I guess he would have to approve/sign off on it, since he is the Senior VP of Industrial Design. I wonder what it's made of, too...

Stretched Aluminium.

Who designed it? I dunno... Maybe the new guy (Cook) said: Screw it, I've always wanted a round computer, let's do it!

:D
 
It is a PCI-e SSD; not a SATA one.
OK, I may have missed this one (haven't gone back and checked). :eek:

There is absolutely nothing attached to the IOHub's 10 SATA ports. Zip. Nada. It is just sitting there entirely idle. Same thing with the chipset's USB 2.0 implementation. Idle.
I'm not arguing otherwise. Don't see where the issue is on this. :confused:

If SATA is being dumped because it is "too slow", why also go "too slow" with PCI-e v2.0 lanes rather than use the E5's v3.0 lanes?
Possible, but I don't think it's as likely as using the I/O Hub (PCIe lanes, not SATA).

My reasoning comes from the fact that there's some pricey silicon in this one (2x GPUs that are by no means bargain basement), which would lead the designers to compromise on other things, such as the complexity of any switch ICs, to help meet the production cost targets they were tasked with. There's also the simplicity involved, and possibly supply issues (being able to use more than one vendor for a more common part helps keep the quantity pricing in line with the production cost targets when everything is accounted for on the BOM).

Attaching the SSD to the I/O Hub would be another area where they can deliver a spec and yet save on costs a bit (a gen 2.0-specced SSD controller rather than gen 3.0).

It adds up.

And Foxconn has a history of making these sorts of compromises IIRC, based on products with their own label on them.

Frankly, Thunderbolt is a mismatch for the E5's v3.0 lanes since it is stuck at v2.0. You would want as few TB controllers as possible, since they'd just waste bandwidth.
This sort of thing hasn't stopped designers from doing this in the past though, and I've no reason to presume anything has fundamentally changed.

Products are designed around development schedules and budgets, and the accountants will make sure these targets are met, regardless of what the engineers would prefer to do.

(I don't think the built-in audio needs one.)
I don't disagree (not necessary regarding bandwidth), but keep in mind that the I/O Hub no longer contains PCI to shift it to. Intel's gone purely PCIe now.

On the E5 lane assignment they could do:

TB controller --> x4 ( throttled to v2 )
TB controller / PCI-e SSD --> switched on a x4 ( presuming it can flip back to v3 when talking to the SSD )
GPU --> x16 v3
GPU --> x16 v3
I see this as likely seen by management as overly complicated and too expensive given the aforementioned conditions.

If the switch has to stay in constant v2 mode then it doesn't make much difference. The SSD is a bit throttled.
This is what I actually suspect is the case, but rather than the SSD being throttled (gen 3.0 SSD on gen 2.0 lanes), it's the lanes directed to the TB chips that will end up throttled, assuming Gen 2 TB runs on gen 2.0 PCIe lanes, which, IIRC, isn't the case.

Now consider that TB is a fundamental and important feature due to the lack of internal storage capabilities; I don't expect they'd have wanted to throttle the TB chips any more than they absolutely had to (while meeting both development and budgetary concerns).
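For what it's worth, that hypothetical x4 / x4 / x16 / x16 split exactly exhausts a single E5's 40 gen-3 lanes, which is easy to tally (the split is speculation from the discussion above, not a confirmed Apple layout):

```python
# Tally the hypothetical lane assignment against a single Xeon E5's
# 40 PCIe gen-3 lanes. The split below is speculative, not confirmed.
E5_LANES = 40

assignment = {
    "TB controller (throttled to gen 2)": 4,
    "TB controller / PCIe SSD (switched)": 4,
    "GPU A (gen 3)": 16,
    "GPU B (gen 3)": 16,
}

used = sum(assignment.values())
print(f"lanes used: {used} of {E5_LANES}, spare: {E5_LANES - used}")
# lanes used: 40 of 40, spare: 0 -- no headroom left for another TB
# controller without adding a switch or falling back to the I/O Hub.
```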
 
From another thread here.

Thunderbolt 1: 10Gb/s = 1.25GB/s
PCIe 4x 1.1: 8Gb/s = 1GB/s
Thunderbolt 2: 20Gb/s = 2.50GB/s
PCIe 16x 2.0: 64Gb/s = 16 GB/s
PCIe 16x 3.0: 128Gb/s = 32GB/s

Apple won.

They didn't like us using SSD/HD and heaven forbid GPUs and PCIE cards from Fry's & Newegg unless they were getting a taste. And so it shall be.

For all the people saying TB = PCIE, read those numbers again. Apple just convinced you that 30 = 3. Or put another way, you are welcome to upgrade your new Mac Pro; it will just cost you a snazzy new TB enclosure, a cable, an extra power supply, and you'll be right at 1/8 the speed you could have had in an old Mac Pro using a PCIE card that didn't need any of those "features".

And yes, it does look like a trash can. And in 3 or 4 years when it is outdated and the 2 or 3 GPU choices Apple offers for $800/ea won't work, you'll be very glad for this final feature: it will be "repurposed".
 
From another thread here.
Thunderbolt 1: 10Gb/s = 1.25GB/s
PCIe 4x 1.1: 8Gb/s = 1GB/s
Thunderbolt 2: 20Gb/s = 2.50GB/s
PCIe 16x 2.0: 64Gb/s = 16 GB/s
PCIe 16x 3.0: 128Gb/s = 32GB/s

Math is off -

64 Gb/sec = 8 GB/sec
128 Gb/sec = 16 GB/sec
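The conversion is just a divide by 8 (using the effective rates quoted above); a quick sanity check:

```python
# Convert the quoted link rates from gigabits to gigabytes per second.
def gb_per_sec(gbit_per_sec):
    """1 byte = 8 bits, so GB/s = Gb/s / 8."""
    return gbit_per_sec / 8

links = {
    "Thunderbolt 1": 10,      # Gb/s
    "Thunderbolt 2": 20,
    "PCIe x16 gen 2.0": 64,
    "PCIe x16 gen 3.0": 128,
}

for name, gbit in links.items():
    print(f"{name}: {gbit} Gb/s = {gb_per_sec(gbit)} GB/s")
# PCIe x16 gen 2.0 comes out to 8 GB/s and gen 3.0 to 16 GB/s --
# matching the corrected figures, not the 16/32 in the quoted post.
```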

FWIW: the PCIe flash storage is fast as all hell, unencumbered by the SATA interface - see something like FusionIO for benchmarks.

The GPU cards on the new machine look replaceable, just not by reflashed PC cards - so yeah, further vendor lock-in there, but it's certainly upgradable.

Given that a 4K video stream can be pushed across a TB2 connection - what the heck are you doing that's going to need higher external bandwidth?

The machine looks like it will support 3+ 4k streams if you want it to....
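As a rough sketch of why that's plausible (assuming uncompressed 4K at 24 bits per pixel and ignoring blanking/protocol overhead; these are back-of-envelope numbers, not measured figures):

```python
# Back-of-envelope: does an uncompressed 4K stream fit in TB2's 20 Gb/s?
def stream_gbps(width, height, bits_per_pixel, fps):
    """Raw pixel data rate in Gb/s, ignoring blanking and overhead."""
    return width * height * bits_per_pixel * fps / 1e9

tb2_gbps = 20.0
uhd_30 = stream_gbps(3840, 2160, 24, 30)   # ~6.0 Gb/s
uhd_60 = stream_gbps(3840, 2160, 24, 60)   # ~11.9 Gb/s

print(f"4K/30: {uhd_30:.1f} Gb/s, 4K/60: {uhd_60:.1f} Gb/s vs TB2 {tb2_gbps} Gb/s")
# A single stream fits easily; even three 4K/30 streams (~18 Gb/s)
# squeeze under one TB2 link, which is why multiple buses matter.
```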

[Edit] Ahh ok - you sell reflashed cards....
Yeah, I can definitely see this cutting into your business going forward - but I think saying it's not upgradable or deficient in some way is a little unrealistic - the machine as specced will be a beast.
 
Possible, but I don't think it's as likely as using the I/O Hub (PCIe lanes, not SATA).

Every reference design Intel has put out has had the TB controller attached to the I/O Hub... all of them. If there is a canonical place where it goes, generally that is where Intel tells you to put it.


My reasoning behind my statements come from the fact that there's some pricey silicon in this one (2x GPU's that are by no means bargain basement), which would lend designers to compromise on other things,

I doubt Apple is going to compromise on anything, because they are going to want to keep the cost up. It is 1/8 what it used to be physically; I doubt they are looking for 1/8 the price. The 2 GPUs just make the device tread water as far as price goes.

such as the complexity of any switch IC's to help meet the production cost targets they were tasked with.

TB controllers are switch ICs. That is one of their primary jobs.

Unless Intel boosted the TB v2 (Falcon Ridge) controllers up to PCI-e v3 (the lack of aggregate bandwidth expansion suggests they only reshuffled the deck chairs, so the PCI-e side is the same), that would make a difference in location.



Attaching the SSD to the I/O Hub would be another area that they can deliver a spec, and yet save on costs a bit (gen 2.0 specc'ed SSD controller rather than 3.0).

Again, I don't think they are looking for a more cost-effective SSD. They could have used a single SATA lane and reused the SSD from the much-higher-volume rMBP here. The volume discounts are bigger than kneecapping the singular storage device in the box.



And Foxconn has a history of making these sorts of compromises IIRC,

Designed in the USA and fabricated in the USA doesn't really scream Foxconn to me.



I don't disagree (not necessary regarding bandwidth), but keep in mind that the I/O Hub no longer contains PCI to shift it to. Intel's gone purely PCIe now.

Typical reference designs throw in a switch on an x1 line to supply diluted data to a legacy PCI slot/connection. However, all Intel chipsets, even the C600 ones, have HD Audio built in. I'm not sure why it would need any external PCI or PCI-e connection to get its work done. However, the built-in 1GbE connection does require an x1 PCI-e connection to hook up the PHY implementation for some reason. I think most of the smarts are present in the chipset, and it is perhaps some of the offload involved.

In this block diagram, Audio doesn't require any:
[image: Intel Z77 chipset block diagram]


http://www.anandtech.com/show/5884/...s-part-2-intels-dz77rek75-asus-p8z77v-premium

I think Apple has just bought into chipset HD Audio at this point, so they really don't need a PCI slot at all, or a switched x1 line. But even if they did, they could throw that in along with the second Ethernet socket (which a decent-sized set of folks won't use).


but rather than the SSD being throttled (gen 3.0 SSD on gen 2.0 lanes), it's the lanes directed to the TB chips that will end up throttled, assuming Gen 2 TB runs on gen. 2.0 PCIe lanes, which IIRC, isn't the case.

I'm a bit skeptical that is the case. As I said, Intel really did not increase the aggregate bandwidth with TB v2.0; it just allowed larger video traffic onto the network. With zero increase in bandwidth, it makes no sense to also increase the PCI-e data maximum: they would end up with more data coming in than they can transport. All TB v2.0 does is reshuffle the deck chairs.

Where there is no heavy video traffic, the larger 20Gb/s headroom allows for better congestion control (and likely latency) on 12-device chains with varied PCI-e traffic.


The other major problem is that the desktop/mobile implementations don't have "extra" v3.0 lanes, and those are the VAST majority of Thunderbolt deployments. As I said, the canonical place to attach the TB controller is the IOHub (Z77, Z87... which are in the same boat as the C600). As long as that only has x8 PCI-e v2.0 lanes, moving the TB controller to v3.0 isn't really doing much; TB gets no faster. Likewise, in the peripherals, lots of folks are hooking up the x1 v2.0 controllers that were punted from the host system... they are stuck at v2.0. What is v3.0 going to do for them? Discount, affordable discrete SATA controllers... v2.0. Etc., etc.
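The negotiation point can be sketched simply: a PCIe link runs at the highest generation both ends support, so a gen-2 peripheral gains nothing from a gen-3 host lane. A toy illustration, using approximate effective per-lane rates:

```python
# Approximate effective per-lane PCIe rates (Gb/s, after encoding:
# 8b/10b for gen 1.1/2.0, 128b/130b for gen 3.0).
LANE_GBPS = {1.1: 2.0, 2.0: 4.0, 3.0: 7.88}

def negotiated(host_gen, device_gen):
    """A link trains to the lower generation of the two ends."""
    gen = min(host_gen, device_gen)
    return LANE_GBPS[gen]

# An x1 gen-2 peripheral controller on a gen-3 host lane:
print(negotiated(3.0, 2.0))  # 4.0 -- no gain from the gen-3 lane
```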
 
As interesting as the new Mac Pro is, for me, the biggest plus from today was OS X 10.9 treating multiple displays separately.

Gonna be sooooooo good.
 
who says they are making the system less expensive ???
Well, first off, it's not me that believes this will be 'cheaper' in any way.

Second, by cheaper, I don't think even the person I was responding to means less expensive for the buyer. I'm certainly using the term cheaper to mean: cheaper for Apple to manufacture, in order to sell for an even higher profit.
 
I haven't caught up with all the posts, but won't Nvidia and ATI come out with new Mac Pro-design GFX cards? It looks like they click in vertically. Furthermore, if this is true, maybe other manufacturers can make designs that work.

Anyway, I'm super excited, waiting for pricing. I sold my 2008 Mac Pro for $1,200 AUD last week; I pulled the trigger at exactly the right time.

I'll probably just wait until the new MBPs get released anyway; I don't have much use for all the power any more, to be truthful. I'm barely doing any editing at home (too much at work!), which is a good thing I guess.

Avid will be pretty pissed off with the move!
 
Ignorance.

I am not terribly technical. I am a creative. I built a Hackintosh (more of a Hac Pro) very easily. I used NoFilmSchool's guide to choose parts and install software. Easy.

Spending hours on end browsing the hackintosh forums to see if they work properly is ignorance?

As that guide specifically covers LGA1155 and NOT LGA2011, I'm not surprised it works better.

As I said, find me a guide on X79 that works easily and doesn't have issues and I'll buy it.

The fact of the matter is, for convenience, and the fact that it will work out of the box with no buggering about, a Hackintosh isn't the way forward.
 
From another thread here.

Thunderbolt 1: 10Gb/s = 1.25GB/s
PCIe 4x 1.1: 8Gb/s = 1GB/s
Thunderbolt 2: 20Gb/s = 2.50GB/s
PCIe 16x 2.0: 64Gb/s = 16 GB/s
PCIe 16x 3.0: 128Gb/s = 32GB/s

Apple won.

They didn't like us using SSD/HD and heaven forbid GPUs and PCIE cards from Fry's & Newegg unless they were getting a taste. And so it shall be.

For all the people saying TB = PCIE, read those numbers again. Apple just convinced you that 30 = 3. Or put another way, you are welcome to upgrade your new Mac Pro; it will just cost you a snazzy new TB enclosure, a cable, an extra power supply, and you'll be right at 1/8 the speed you could have had in an old Mac Pro using a PCIE card that didn't need any of those "features".

And yes, it does look like a trash can. And in 3 or 4 years when it is outdated and the 2 or 3 GPU choices Apple offers for $800/ea won't work, you'll be very glad for this final feature: it will be "repurposed".

That's bad news for video cards, but what else uses anything beyond PCIe x4? Any capture card, audio card, Fibre Channel card, or RAID array will not even saturate Thunderbolt 2.0. Stop FUDding. And don't reply to me with 2 devices that can.
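A rough sanity check of that claim (the peak figures below are ballpark assumptions for typical cards, not benchmarks):

```python
# Rough peak throughput of common expansion cards vs TB2's usable
# ~2.5 GB/s. All figures are ballpark illustrations, not measurements.
TB2_GBYTES = 2.5

cards_gbytes = {
    "HD-SDI capture card": 0.4,
    "pro audio interface": 0.1,
    "8Gb Fibre Channel HBA": 0.8,
    "6Gb SAS RAID (4 drives)": 2.2,
}

for name, rate in cards_gbytes.items():
    verdict = "fits" if rate < TB2_GBYTES else "saturates"
    print(f"{name}: ~{rate} GB/s -> {verdict} TB2")
# On these assumptions, only high-end GPUs (or very wide SSD RAID)
# would exceed what a single TB2 link can carry.
```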
 
I'm certainly using the term cheaper meaning: for Apple to manufacture, in order to sell for an even higher profit.

Like Apple needs to goose the Mac Pro margins higher for extra money so they aren't broke.

I don't think it would be any higher than it was. The current Mac Pro has huge margins (relative to the mainstream PC industry). If Apple takes out some stuff to make it smaller, they will need to put back in other, smaller, more expensive items to tread water on price and margins.

Less expensive to ship, sure... but this isn't getting the bill-of-materials cost down.

Typically, if you want to make something cheaper to manufacture, you leverage standard parts, which are cheaper due to higher volume. This device goes in the opposite direction: multiple custom boards, custom thermal management, a fan that only fits in this single device, workstation GPUs instead of mainstream ones...

If they sell enough, this custom work isn't going to be prohibitively expensive, but it's not really the optimal path to cheaper components. Fewer, slightly more expensive parts is on track to tread water on system costs.
 
I like the aesthetic design, but not its consequences.

* Using external storage - no big problem. Today I have storage in several RAID enclosures. But extra TB 2x PCIe extension boxes ($700 a piece?), an FW adapter, and an optical enclosure soon take up the space gained by a smaller Mac Pro.

* The 2x GPU solution. As mentioned, it looks a bit weak. 2 cards only getting 3 times the performance of the current single GPU (5770)? And no CUDA. Another TB box?

* Single CPU. Takes away the main reason for using Xeons. A significant disadvantage compared to Dell/HP 2x CPU workstations. This of course goes for the reduced number of memory banks as well.

* No 10-gigabit ethernet. Yet another TB external box?

* I am, as others, seriously concerned about the TB bandwidth.
- Fast work and main raid storage
- Backup raids
- FW
- 10-gigabit net.
- Extra CUDA card?
- 4k displays
I can't imagine the TB bandwidth won't be an issue.

As for the "twist-and-see-the-back" feature - has anyone tried to twist anything with 10+ cables attached? My Mac Pro with 15 cables attached is not very agile in that respect.
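To put rough numbers on the bandwidth worry above (every per-device figure here is a guess, purely illustrative):

```python
# Worst-case simultaneous load on the TB buses, with guessed
# per-device peaks in Gb/s. Purely illustrative figures.
TB2_LINK = 20  # Gb/s per TB2 bus

loads = {
    "work RAID": 8,
    "backup RAID": 4,
    "FireWire adapter": 0.8,
    "10GbE adapter": 10,
    "external CUDA box": 16,
    "4K display": 12,
}

total = sum(loads.values())
print(f"total ~{total:.1f} Gb/s vs {TB2_LINK} Gb/s per bus")
# ~50 Gb/s of peak demand spread across (reportedly) three TB2 buses
# only works if devices are placed on chains carefully -- the
# concern isn't baseless, even if peaks rarely coincide.
```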
 
I wonder how long it will be before some witty parts manufacturer figures out that the machine itself will make a perfect monitor stand? Just need a belt, an articulated arm, and a 4-hole mounting bracket. :D

They should have made it round like a beachball. Then it could spin every time the OS is too busy to answer a request.
 
Naw, it's all about the number of screws, number of pieces, and number of solder pads. Shape doesn't matter much. In fact, this tube shape is actually easier to work with, from what I know about FA (factory automation).



Apple pays to ship the parts to their assembly plants, then to a warehouse once completed, then to stores or airports; if the latter, then also to warehouses and/or stores when they land. You only have to pay for the very last leg of distribution (from its stored location to your front door).



No, no. Again it's all about the number of pieces, number of screws, number of cables needing to be attached, and so on. Look again.

  • Where are the wiring harnesses? Vastly reduced.
  • Where's the drive backplane? Gone.
  • Sleds? Gone.
  • 6 SATA headers, 4 SATA power headers? All gone.
  • GPU cards? No longer hand-fitted. Also probably manufactured internally and without the need for power connectors, cables, heat-sinks, or fans.
  • ODD drive doors? Gone.
  • ODD cage? Gone.
  • Rear expansion slot covers and screws? All gone.
  • CPU heat-sinks, GPU heat-sinks, chipset heat-sinks? All morphed into one and set up so that a robot can do it FA-style.
  • The 3 different port-out PWBs? Became one - even the AC power looks surfaced onto that same singular PWB.
  • The card-edge headers? Gone. This reduces a lot of expense in just this single exclusion.
  • Third-party fans? Only one is needed now. There are four in my MP, and all are purchased from a 3rd-party supplier.
No, totally, bro. They have streamlined both cost and production on this new system - maxed it out. As I said elsewhere, the new Mac Pro is an engineering feat and a half! And almost all of that engineering was for Apple's own benefit == lower costs all around!



Yup, a lot of intelligence has been applied to this machine. Now the only thing is to wait and see if it ultimately pays off for them.



Yeah, that's a question I have as well. If they don't intend to offer a dual-CPU configuration, then why use Xeon? Is it useful in this design for something I'm spacing on, or?

From an engineer's point of view (which I am), it is a design and engineering feat. Reducing parts means reducing costs: cost of materials, cost of time, cost of labor. Now, instead of using 5 people to fully assemble it, you can use just one, because there are fewer parts to put in.

Reliability. Fewer cable interconnections result in better reliability.

The thermal core is a good design; by using only a single fan, it reduces the energy needed to cool the machine. Hot air always rises, so pulling cooler air from below is a smart solution. There are some other things that would merit a whole new discussion.

Lastly, environmental impact. Less energy required, less material needed, less impact on the environment.
 
I'm not a pro, but I have a massive iTunes library and video/photos to store. I was interested in the new Mac Pro because of the potential for swappable hard drives etc. I hate having externals all over the place.

The cost of externals will make this mega expensive, plus you will need extra externals for backups as well.

Plus the number of plug sockets/surge protectors you would need. I'm in a small home office and I already have 10 things plugged in around my desk... so I'd hate to see what a busy production suite would look like.

Now that it's all just external, I might as well just stick with my iMac.
 