> Don't forget the storage drive is also now a direct PCI-e consumer. (They may have grossly oversubscribed the IOHub with two TB controllers, though. That wouldn't surprise me.)

The SSD will definitely be attached to the I/O Hub (chipset), not the designer-configurable PCIe lanes in the PCIe controller.
So I still expect the lanes the designers had to work with are in a 2× x16 configuration for the GPUs, with the remaining x8 lanes running to a switch that is tied to the 3× TB2 chips.
It's possible one of the TB chips is tied to the I/O Hub, but that wouldn't make as much sense to me versus tying all of them to a switch and using the I/O Hub for everything else on the I/O panel that isn't TB. That's a better use of the I/O Hub's PCIe lanes, which would choke anyway with everything connected to them running simultaneously; keeping a TB chip off of them helps with that, particularly given that the SATA channel in use is attached to an SSD.
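As a rough sanity check on that lane budget, here's a minimal sketch, assuming a single 40-lane CPU, two x16 GPU links, and an x8 uplink to a switch feeding the three TB2 controllers; all of the figures are my assumptions for illustration, not confirmed specs:

```python
# Hypothetical PCIe lane budget for the topology described above.
# All numbers are assumptions for illustration, not confirmed specs.

CPU_LANES = 40  # assumed PCIe 3.0 lanes on a single Ivy Bridge-EP class Xeon

allocation = {
    "GPU A": 16,
    "GPU B": 16,
    "PCIe switch -> 3x TB2 controllers": 8,
}

used = sum(allocation.values())
print(f"CPU lanes used: {used}/{CPU_LANES} (spare: {CPU_LANES - used})")
for name, lanes in allocation.items():
    print(f"  {name}: x{lanes}")

# Each TB2 controller wants roughly a PCIe 2.0 x4 back end (~2 GB/s), so three
# of them behind a shared x8 PCIe 3.0 uplink (~7.9 GB/s) fits on paper, which
# is why hanging all three off one switch is plausible.
```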
> Really kind of a head-scratcher of a design when the C600 chipset is sitting inside the same box with 10 SATA lanes available and Apple skips all of them to go to a PCI-e storage card on even more switched PCI-e lanes. The IOHub is bought and then almost 100% bypassed except for its x8 PCI-e lanes (just one Ethernet socket and audio I/O, probably). The USB controller and SATA controller are completely idle.

Not so much in my mind, as they're clearly interested in making their designs as small as possible. Based on that logic, I don't see skipping vast swaths of the I/O Hub's features as a stretch at all.
> Well, it looks to me like they just cut the cost of producing (manufacturing, assembling, and shipping) a "Mac Pro" in HALF, or very near to it. So that's what I mean by saving money, or as I actually said, "cutting costs".

It wouldn't save that much, but it's there, particularly on the case. The SSD is at least as expensive as a larger mechanical HDD, so not a huge change there. The GPUs are doubled up, so there's actually an increase on that front. The CPU is what it is for the selected socket. Probably a bit on the PSU, but nothing drastic (this assumes the system is single socket only, and that the 12 cores are either virtual cores or waiting on Xeon Ivy Bridge to ship).
> Yup, every new tech generates a lot of new purchases. A fact I believe Apple is counting on.

Exactly. This thing is clearly aimed at trying to pull the enthusiast user back to the Mac, IMHO (the HDMI port is a major clue).
> I don't think these specs are "ambitious" in any way. We could call them "modern" or "current" and not be wrong, though.

I agree entirely.
Even the cooling system isn't new. It's been around since at least the '80s, particularly in power electronics. Even Krell used it back then as a means of cooling the power transistors in their amplifiers, so "new tech"? Not by a long shot.
> If I were still operating a render farm these might be perfect in fact - again, if the price is right. CUDA cores from dual GPUs (when they release an NVidia model), a very small footprint, looks like low power requirements, heck ya. Let's say 12 of these babies and half a rack of used XServe gear as a file server.

It's not a good fit for this IMHO, due to needing a TB-to-whatever network adapter (10G Ethernet, FC, or IB) to get a faster network connection. And if a node is also attached to its own DAS storage pool, the TB connections will be slowed even further, as they're ultimately sharing the same PCIe connection (switched in the TB chip if attached to the same one, or via a PCIe switch if each device is attached to a different TB chip). Either way, you can easily run into a bottleneck in such a configuration.
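To put numbers on that sharing concern, here's a minimal back-of-the-envelope sketch. The link widths, device demands, and the simple proportional-sharing model are all my assumptions for illustration, not measurements:

```python
# Rough estimate of what happens when devices share one TB controller's
# PCIe uplink. All figures and the fair-share model are illustrative assumptions.

PCIE2_X4_GBPS = 16.0  # approx. usable PCIe 2.0 x4 back-end bandwidth (after 8b/10b)

# Hypothetical devices hanging off the same controller:
demands_gbps = {
    "10GbE network adapter": 10.0,
    "DAS storage pool": 8.0,
}

total = sum(demands_gbps.values())
print(f"Aggregate demand: {total:.1f} Gbit/s vs ~{PCIE2_X4_GBPS:.1f} Gbit/s uplink")

if total > PCIE2_X4_GBPS:
    scale = PCIE2_X4_GBPS / total  # crude proportional sharing
    for dev, want in demands_gbps.items():
        print(f"  {dev}: ~{want * scale:.1f} Gbit/s effective (wanted {want:.1f})")
```

The point isn't the exact numbers; it's that two "fast" devices behind one shared back end can't both run at full speed at the same time.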
> The new one, if it's quiet enough, could be a desktop machine.

Given its cooling configuration, I wouldn't be too sure of this, especially if it's being run near or at its limits, as the fan would have to spin up to a high RPM to push sufficient airflow over the aluminum extrusion.
Keep in mind, it's realistic for the cooling system to have to push out ~430W of heat under full load (figuring 150W per GPU + 130W for the CPU at max TDP = 430W). That's a lot of heat for a single cooling system to move, and an extrusion isn't as thermally efficient as other cooling techniques, such as heat pipes + fins + a forced-air exchange system, or plates (liquid cooling).
So the only realistic way to avoid an overheating issue in their design is by ramping the fan. Even at, say, 6 inches in diameter, you'll still be able to hear it due to the RPM necessary to dump that much heat.
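For a sense of scale, here's a minimal sketch of the airflow that heat load implies, using the standard sensible-heat relation Q = ṁ·cp·ΔT. The 430W figure is from above; the allowable intake-to-exhaust temperature rise and the air properties are my assumptions:

```python
# Airflow needed to carry away an assumed 430W heat load.
# Q = m_dot * cp * dT  ->  volumetric flow = Q / (rho * cp * dT)

Q_WATTS = 430.0   # assumed full load: 2 x 150W GPUs + 130W CPU
RHO_AIR = 1.2     # kg/m^3, air density near sea level (approx.)
CP_AIR = 1005.0   # J/(kg*K), specific heat of air (approx.)

for delta_t in (10.0, 15.0, 20.0):  # assumed intake-to-exhaust rise in kelvin
    flow_m3_s = Q_WATTS / (RHO_AIR * CP_AIR * delta_t)
    cfm = flow_m3_s * 2118.88       # 1 m^3/s is about 2118.88 CFM
    print(f"dT = {delta_t:4.1f} K -> ~{cfm:5.1f} CFM from a single fan")
```

Somewhere in the 40-75 CFM range through one fairly restrictive extrusion core, from a single fan, is exactly where the RPM (and the noise) would come from.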
> On the contrary, I think it's very much something Jobs would have signed off on. Apple has been moving to this for a long time.

Exactly.
Anyone recall the "trucks" statement Jobs made when talking about desktops/workstations?