Go Back   MacRumors Forums > Apple Hardware > Desktops > Mac Pro

Old Jun 6, 2013, 12:12 AM   #1
wiz329
macrumors 6502
 
Join Date: Apr 2010
Thunderbolt bandwidth potential

I didn't want this to get lost in one of the threads about the new MP that's hopefully coming at WWDC.

There's a lot of new talk about a modular, stackable design, with everything linked together via Thunderbolt.

My question: is this even possible? It doesn't seem like TB would provide enough bandwidth.

Since the new version of Thunderbolt will only carry PCIe x4 as I understand it, you couldn't even use it to connect more than one GPU at full performance, let alone connect multiple CPUs (and RAM and everything else) and make them communicate and work in unison.
Old Jun 6, 2013, 04:02 AM   #2
ElderBrE
macrumors regular
 
Join Date: Apr 2004
Nope, not possible right now.

It would be with external PCIe cables, but not with Thunderbolt.

Even yesterday's announcement of Thunderbolt 2 for late this year would not be enough. Let's see, if I'm remembering the bandwidths correctly:

Thunderbolt 1: 10Gb/s = 1.25GB/s
PCIe 4x 1.1: 8Gb/s = 1GB/s
Thunderbolt 2: 20Gb/s = 2.50GB/s
PCIe 16x 2.0: 64Gb/s = 16 GB/s
PCIe 16x 3.0: 128Gb/s = 32GB/s
PCIe 16x 4.0: 256Gb/s = 64GB/s

As you can see, if that modular thing is going to happen, it will not be through Thunderbolt. And there is no indication from the reports that it will be, it is just people guessing that.

It would be very cool, but you wouldn't need a Mac Pro to do that then, just pop a MacBook Pro on your desk, plug the modules cables and you're good. Need to leave? Shut off, unplug, and you have your system on the go. We will get there, but not yet...

Last edited by ElderBrE; Jun 6, 2013 at 04:56 AM.
Old Jun 6, 2013, 04:50 AM   #3
666sheep
macrumors 68030
 
 
Join Date: Dec 2009
Location: Poland
Quote:
Originally Posted by ElderBrE View Post
Thunderbolt 1: 10Gb/s = 1.25GB/s
Thunderbolt 2: 20Gb/s = 2.50GB/s
PCIe 4x 1.1: 8Gb/s = 1GB/s
PCIe 16x 2.0: 64Gb/s = 16GB/s
PCIe 16x 3.0: 128Gb/s = 32GB/s
Fixed that for you.
Old Jun 6, 2013, 04:52 AM   #4
ElderBrE
macrumors regular
 
Join Date: Apr 2004
Quote:
Originally Posted by 666sheep View Post
Fixed that for you.
Thank you sir, updating.
Old Jun 6, 2013, 04:57 AM   #5
robbieduncan
Moderator
 
 
Join Date: Jul 2002
Location: London
Depends on the number of Thunderbolt ports on each section? What if the master/CPU section has 8 thunderbolt ports?
Old Jun 6, 2013, 06:24 AM   #6
ElderBrE
macrumors regular
 
Join Date: Apr 2004
Quote:
Originally Posted by robbieduncan View Post
Depends on the number of Thunderbolt ports on each section? What if the master/CPU section has 8 thunderbolt ports?
You're still limited by the bandwidth. What you're suggesting is grouping them to reach the necessary speed, which doesn't seem like a reliable solution.

I haven't seen any, but I believe ePCIe cables exist that could do this.

Quote:
Originally Posted by ElderBrE View Post
Thank you sir, updating.
Fixing again, according to the PCI Express page on Wikipedia:

PCIe 4x 1.1: 10GT/s = 1GB/s
Thunderbolt 1: 10Gb/s = 1.25GB/s
Thunderbolt 2: 20Gb/s = 2.50GB/s
PCIe 16x 2.0: 80GT/s = 8GB/s
PCIe 16x 3.0: 128GT/s = 15.75GB/s
PCIe 16x 4.0: 256GT/s = 31.51GB/s
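For anyone who wants to re-derive these figures rather than take Wikipedia's word for it, here is a quick Python sketch. The per-lane transfer rates and the 8b/10b vs 128b/130b line-coding overheads are the published per-generation PCIe figures; the function name itself is just illustrative.

```python
# PCIe payload bandwidth = transfer rate x lanes, minus line-coding
# overhead (8b/10b for gen 1.x/2.0, 128b/130b for gen 3.0/4.0).
# All figures are per direction.
GEN = {
    # generation: (GT/s per lane, encoding efficiency)
    "1.1": (2.5, 8 / 10),
    "2.0": (5.0, 8 / 10),
    "3.0": (8.0, 128 / 130),
    "4.0": (16.0, 128 / 130),
}

def pcie_gbytes_per_s(gen, lanes):
    rate, eff = GEN[gen]
    return rate * lanes * eff / 8  # GT/s -> GB/s after encoding

print(round(pcie_gbytes_per_s("1.1", 4), 2))   # x4  1.1 -> 1.0
print(round(pcie_gbytes_per_s("2.0", 16), 2))  # x16 2.0 -> 8.0
print(round(pcie_gbytes_per_s("3.0", 16), 2))  # x16 3.0 -> 15.75
print(round(pcie_gbytes_per_s("4.0", 16), 2))  # x16 4.0 -> 31.51
```

Note how the encoding change in gen 3.0 is why x16 3.0 comes out at 15.75GB/s rather than a clean doubling of 8GB/s.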

Last edited by robbieduncan; Jun 6, 2013 at 06:45 AM.
Old Jun 6, 2013, 06:46 AM   #7
robbieduncan
Moderator
 
 
Join Date: Jul 2002
Location: London
Quote:
Originally Posted by ElderBrE View Post
You're still limited to the bandwidth, what you're suggesting is grouping them to achieve the necessary speed, which doesn't seem like a reliable solution.

I haven't seen any but I believe ePCIe cables exist that could do this.
Why not? To my mind this is essentially how PCIe works: you have between x1 and x16, which is between 1 and 16 individual lanes grouped together.
Old Jun 6, 2013, 07:05 AM   #8
ElderBrE
macrumors regular
 
Join Date: Apr 2004
Quote:
Originally Posted by robbieduncan View Post
Why not? To my mind this is essentially how PCIe works. You have between 1 and 16x which is between 1 and 16 individual lanes grouped together.
When I first read it, I was thinking you'd have to use several cables for a single GPU, for example. That, maybe, could be avoided. If not, you'd need 4 cables to reach the desired x16 for a GPU in that grouping example.

However, aren't you limited in the number of TB interfaces you can attach to a single motherboard?
Old Jun 6, 2013, 07:10 AM   #9
robbieduncan
Moderator
 
 
Join Date: Jul 2002
Location: London
Quote:
Originally Posted by ElderBrE View Post
When I first read it I was thinking you'd have to use several cables for a single GPU, for example. That, maybe, could be avoided. If not, you'd need 4 cables to reach the desired 16x for a GPU for example in that grouping example

However, aren't you limited to the number of TB interfaces you can attach to a single MB?
You are limited to the number of interfaces supplied by a single interface chip, but you can have multiple chips on a board. So I don't see that having more than 2 would be an issue. Of course, most (or all, depending on the CPUs and/or GPUs in the main host block) could not drive screens: they would be data only...
Old Jun 6, 2013, 07:19 AM   #10
ElderBrE
macrumors regular
 
Join Date: Apr 2004
Quote:
Originally Posted by robbieduncan View Post
You are limited to the number of interfaces supplied by a single interface chip. But you can have multiple chips on a board. So I don't see that having more than 2 would be an issue. Of course most (or all depending on the CPUs and or GPUs in the main host block) could not drive screens: they would be data only...
And for storage it could drive the limits well beyond what is being experienced right now...

And, if as the rumour says, you may have two GPUs in the main casing, you'd be set for driving the monitors anyway.

It's not as modular as some people would wish, but it increases the possibilities for cards.

However, in this case, wouldn't it be simpler to just upgrade the current model to offer TB and more PCIe slots (or, if you want the external capability, include one or two external PCIe cables) rather than do all the TB managing?
Old Jun 6, 2013, 07:26 AM   #11
robbieduncan
Moderator
 
 
Join Date: Jul 2002
Location: London
Quote:
Originally Posted by ElderBrE View Post
And for storage it could drive the limits well beyond what is being experienced right now...

And, if as the rumour says, you may have two GPUs in the main casing, you'd be set for driving the monitors anyway.

It's not as modular as some people would wish, but it increases the possibilities for cards.

However for this case, wouldn't it be more simple to just upgrade the current model to offer TB and more PCIe slots or if you want the external capability, include one or two external PCIe cables than all the TB managing?
What if there are no PCIe slots at all? There are 40 lanes (say; I'm not actually sure) available to a modern motherboard. That's 10 Thunderbolt interface chips, or a total of 20 Thunderbolt sockets. Push everything (drives, expansion, everything) into external blocks that you can stack at will. Could be an interesting concept. The closest I can think of is Acorn's RiscPC from the 90s with its expanding case/slot bus.
Old Jun 6, 2013, 08:39 AM   #12
deconstruct60
macrumors 603
 
Join Date: Mar 2009
Quote:
Originally Posted by robbieduncan View Post
Why not? To my mind this is essentially how PCIe works. You have between 1 and 16x which is between 1 and 16 individual lanes grouped together.
Two reasons.

1. You are skipping over the fact that Thunderbolt needs both DisplayPort (DP) signals and PCI-e signals as inputs. So more Thunderbolt controllers means additional DisplayPort inputs are required. For example, for a dual-output controller:

Num TB Controllers ______ Input requirements

1 __ 2 DP & x4 PCI-e
2 __ 4 DP & x4 + x4 PCI-e (not x8, but two independently switched x4s)
3 __ 6 DP & x4 + x4 + x4 PCI-e
4 __ 8 DP & x4 + x4 + x4 + x4 PCI-e

Good luck finding 6+ independent DP outputs on a small budget. Even 4 should prove interesting (even with the shift to DP v1.2, where multiple displays go out on a single stream).
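To make point 1 concrete, here is a small sketch of how the input requirements scale. The 2-DP-plus-x4-PCI-e cost per dual-output controller is taken from the table above; the function and its defaults are purely illustrative.

```python
# Each dual-output Thunderbolt controller of this era needs 2 DisplayPort
# inputs plus its own x4 PCIe uplink. The uplinks are independently
# switched per controller, so they do NOT aggregate into one fat pipe.
def tb_host_inputs(controllers, dp_per_controller=2, lanes_per_controller=4):
    return {
        "displayport_inputs": controllers * dp_per_controller,
        "pcie_uplinks": [lanes_per_controller] * controllers,  # x4 + x4 + ...
    }

print(tb_host_inputs(4))
# {'displayport_inputs': 8, 'pcie_uplinks': [4, 4, 4, 4]}
```

The list of separate x4 uplinks, rather than a single x16 value, is the whole point: four controllers give you four isolated networks, not one x16-class link.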


2. Each TB controller in a PC host puts out an independently switched PCI-e network. The data on one is not coupled (e.g., isochronous, synchronized, etc.) to any of the others.

That is in complete contrast to what you pragmatically get on a coupled x4, x8, or x16 PCI-e lane bundle.

The basic design intent is that each TB controller is in a separate box. Even if you could put two controllers in box A, they aren't really intended to be hooked to two controllers in box B. It is really boxes B1 and B2 that would be aligned with how Thunderbolt is designed to be used.

A lot of this hocus-pocus disconnect from Thunderbolt's real design objectives occurs because folks start with the premise that "Thunderbolt is just external PCI-e... so I'm going to lay things out like I was using PCI-e". It is not just external PCI-e. External PCI-e already exists; Thunderbolt is not a reinvention of the wheel. It is something different, and with that difference come different constraints.

Thunderbolt is a new protocol, not PCI-e or DP, that transports those two protocols' base traffic to a remote box, where it can be decoded back into its native state and put back onto a remote switched network of that type. To the host system it appears as a network of PCI-e/DP switches. That's its primary job: to transparently look like a switch from the outside. So adding TB is largely layering switches onto the design.

A huge increase in switches isn't indicative of getting more bandwidth. On the contrary, it is typically indicative of diluting bandwidth, not increasing it.

----------

Quote:
Originally Posted by robbieduncan View Post
That's 10 thunderbolt interface chips or a total of 20 thunderbolt sockets.
And 10 TB controllers would require 20 DisplayPort input streams. Where are you getting those from? And don't the producers of those streams also consume PCI-e inputs?

TB doesn't scale this way at all! On a host PC system, pragmatically more than x4 worth of internal system bandwidth is consumed to produce all of the inputs that TB requires.

Last edited by deconstruct60; Jun 6, 2013 at 10:04 AM.
Old Jun 6, 2013, 09:00 AM   #13
KBS756
macrumors 6502
 
Join Date: Jan 2009
What keeps apple from developing their own interface between these modular sections, or using something like the connector between the processor board and the motherboard in the current Mac Pro to connect modular parts?

I dont see why if you do go modular it would have to be thunderbolt
__________________
27" + 24" LED ACD; 2 x 3.33GHz X5680 Mac Pro, 16 GB RAM, 512 GB Samsung 830 SSD ,EVGA Geforce Titan Super-clocked; Early 2013 2.8Ghz 15in Retina Macbook Pro 16GB Ram 756GB
Old Jun 6, 2013, 09:28 AM   #14
deconstruct60
macrumors 603
 
Join Date: Mar 2009
Quote:
Originally Posted by wiz329 View Post
My question: is this even possible? It doesn't seem like TB would provide enough bandwidth.
It is possible. It largely depends upon how you slice up the old Mac Pro. The current Mac Pro has 3 major thermal zones and a few sub-areas within those zones: the top "5.25 bay + power supply" zone, the "3.5" sleds + PCI-e card" zone, and the "CPU" zone.

http://en.wikipedia.org/wiki/File:Si..._Mac_Pro_.jpeg
[ Mac Pro on right, but the G5 has a similar philosophy. From the Mac Pro Wikipedia page: http://en.wikipedia.org/wiki/Mac_Pro. I'd use a cleaner Apple image, but it will probably change (or break) over time. ]

Let's ignore where the Power Supply will go and start chopping them off.

1. Chop off the 5.25 bays and send them to an outside box. Requires an external SATA controller in the outside box, but that is well within TB's abilities.

2. Chop off the 3.5" drive sleds. (There are several 4-bay external drive devices already on the market.) Again well within TB's abilities, and a single box could consume both of these slices.

3. Not so apparent in the picture are the front-facing USB and FW sockets. Again, chop these off and send them to an external box. You need to put additional USB and FW controllers in the external box, but that is well within TB's range.

So far we have lopped off probably around 1/4 or so of the box.

4. Chop off the two x4 PCI-e slots. Well, frankly, that might be where we're getting the x4 for the TB controller, so those are gone anyway. [In most deployed TB designs so far, the controller snags x4 lanes off the IOHub/Southbridge controller, which has x8 to spread around.] In the current Mac Pro those two x4 slots are switched, so there is really only x4 worth of bandwidth there anyway.

If you go into hocus-pocus multiple-TB-controller design mode, you can make up how these are farmed off to a 2nd controller, though you are really getting into a state of dilution at this point if you are also shuffling these slots to an external PCI-e cage on the same TB network.

At this point we have lopped off probably over 1/3 or so of the box.

All of that is pretty feasible. It substantively increases total system cost to "reassemble" it, but it could be done, even if it doesn't make economic sense.
It also wastes a small amount of money internally, because the SATA controller you have to buy with the CPU anyway supports 6+ devices and now has none. Sure, you can slip in a single boot drive, but you are still wasting the vast majority of the SATA channels already paid for.

Is net overall completed system space or weight going down? Nope. All the external boxes require their own power supplies. TB is not a modular power supply solution, and it was never intended to be one.

Also, each of these external boxes has to take on the overhead of handling the "backward compatible with DisplayPort" requirement that these subsystems don't need if kept inside an integrated box, where DP already has a dedicated separate path.


Where the ultimate modularity design takes a right turn into the swamp is when you start chopping off the components lower down in the current Mac Pro.

5. Start chopping off x16 slots and moving them outside. Well, that means giving up x12 worth of bandwidth per slot. All modern Intel options are PCI-e v3.0 and Thunderbolt is stuck at v2.0, so dropping a single x16 v3.0 slot is the equivalent of dropping x32 worth of v2.0 bandwidth. So sure, toss x28 lanes' worth of PCI-e down the drain to go external.

Surprisingly, pragmatically this will work for a fairly large number of folks. There are GPU workloads (games, mainstream apps) that are optimized for x8 worth of bandwidth, so dropping down to x4 isn't as much of a killer as it would appear. The compression, caching, and copying workarounds the apps have in place triage the loss of bandwidth so that it won't produce as dramatic a drop-off.

However, you can't chop off too many x16 connections, because where are you going to get the DisplayPort output streams from? And will the user really want to hook their display into this "smaller for smaller's sake" network?


6. Start chopping off the interconnections between CPU packages and/or RAM. This gets too deep into the swamp. Here are the speeds that Intel and AMD use for CPU interconnects:

QPI (7.2GT/s Intel) 230 Gb/s
QPI (8GT/s Intel ) 256 Gb/s
HyperTransport 3.0 (AMD) 332 Gb/s
HyperTransport 3.1 (AMD) 409 Gb/s


In a dual Xeon E5-2600 setup, the CPU packages are interconnected with two QPI links, so double those Intel numbers above.

This is an order of magnitude off from even the "new and improved" TB v2.0 (20Gb/s, as long as you're not transporting any video), which is still a year away. And that is just bandwidth; the latency overhead is off by a mile as well.

The notion that TB is going to get you a unified-system-image cluster running at similar speeds is not going to happen. Not even close.
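Putting the figures quoted in this post side by side makes the gap plain. This is just arithmetic on the numbers above (the dict and its labels are illustrative; the dual-link doubling follows the dual Xeon E5 note):

```python
# CPU interconnect vs Thunderbolt, in Gb/s, using the figures above.
# A dual E5-2600 setup uses two QPI links, hence the 2x.
interconnects_gbps = {
    "Thunderbolt 2": 20,
    "QPI 8GT/s, dual link (Intel)": 2 * 256,
    "HyperTransport 3.1 (AMD)": 409,
}

tb2 = interconnects_gbps["Thunderbolt 2"]
for name, bw in interconnects_gbps.items():
    print(f"{name}: {bw} Gb/s ({bw / tb2:.1f}x TB2)")
```

Dual-link QPI comes out around 25x a TB2 channel on raw bandwidth alone, before latency even enters the picture.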

What folks use for cluster interconnect is in a whole other ball game. For example, Intel's (formerly Cray's) Aries interconnect switch requires a full x16 PCIe v3.0 connection to the CPU, and much higher than that between the switches. http://www.theregister.co.uk/2012/11...supercomputer/ Thunderbolt would be a cruel joke as a replacement.

Last edited by deconstruct60; Jun 7, 2013 at 09:14 AM.
Old Jun 6, 2013, 09:42 AM   #15
ElderBrE
macrumors regular
 
Join Date: Apr 2004
That's enlightening deconstruct. Thanks for the input.
Old Jun 6, 2013, 10:00 AM   #16
goMac
macrumors 603
 
Join Date: Apr 2004
Quote:
Originally Posted by KBS756 View Post
What keeps apple from developing their own interface between these modular sections, or using something like the connector between the processor board and the motherboard in the current Mac Pro to connect modular parts?

I dont see why if you do go modular it would have to be thunderbolt
I don't actually think the modular design that's being implied here is what is being rumored. There probably isn't a "stackable" Mac Pro in the cards.
Old Jun 6, 2013, 10:41 AM   #17
Tesselator
macrumors 601
 
 
Join Date: Jan 2008
Location: Japan
Quote:
Originally Posted by goMac View Post
I don't actually think the modular design that's being implied here is what is being rumored. There probably isn't a "stackable" Mac Pro in the cards.
Ya, that's the fantasy of some magazine publisher. It's based on absolutely nothing but fantasy too. Not a single shred of evidence and no engineering principles were offered.

Is it possible? Yes of course, absolutely. If they did it via TB v1 some people would occasionally bump into the bandwidth limit but 99% of us 99% of the time wouldn't. If they did it by TB v2 that would be more like 99.9%.

GPUs don't ordinarily eat bandwidth. I guess you could have four GTX 780 cards on one TB v1.0 chain and almost never bump the limits. There might be something like three or four 1 min. periods during each week of 9 to 5 hard use where the limits are reached. Unless of course your main thing is running benchmark utilities.

The same goes for storage. Storage limits are mostly never reached no matter what applications you use, though benchmark apps will reach them. The ideal is between 600 and 800 MB/s burst and extremely low latencies, where about ≥200MB/s with 4 to 32k files can be obtained.

IMO the average MP user could have a 2-drive SSD RAID0, a 3-drive HDD RAID0, a fast 6TB Time Machine drive, and two GTX 780 cards all on TB v1 and never see the limits hit while using any of the Adobe applications. And if it did hit the limit, it would only hit for a few seconds or so per day of heavy use. TB v2 would of course be even better.
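As a back-of-the-envelope check on that claim, here is a sketch summing peak demand for a hypothetical chain like that against TB v1's raw 10Gb/s (~1250MB/s) link. The per-device figures are rough assumptions for illustration, not measurements:

```python
# Assumed worst-case simultaneous throughput per device, in MB/s.
devices_mb_s = {
    "2-drive SSD RAID0 burst": 800,
    "3-drive HDD RAID0": 450,
    "Time Machine drive": 150,
}
TB1_LINK_MB_S = 1250  # raw 10Gb/s channel; real PCIe payload is lower

peak = sum(devices_mb_s.values())
print(f"worst-case simultaneous demand: {peak} MB/s vs ~{TB1_LINK_MB_S} MB/s link")
# Demand only exceeds the link when everything bursts at once, which is
# consistent with hitting the limit for seconds per day, not hours.
```

In other words, the ceiling is real but you only touch it when every device bursts simultaneously.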

But would such a (fantasy) modular system even use TB when card-edge, full-spec PCIe is just as doable and probably a lot cheaper to manufacture? Look at how Apple designs their machines, starting from the larger 68K systems through the PPC models and even the Intel-based systems... They love card-edge connections. Heck, they even have the CPUs on card-edge connections (kinda), and the first three MP designs had all the RAM on two different card-edge connections.

If it's a stacked system, why run cables? Have a little door in one box and a 4-point latching system with a lever, similar to how the MP side covers work now. Or even have one side of one box replace the need for the side cover altogether (it could be the top, bottom, or side, actually). Third-party companies could even offer door-and-cable systems so that you could arrange them as you liked instead of stacked (butted) onto one another.

Of course, like I said above, this is all fantasy. No one can say whether they will or won't, can or can't design such systems. We can't even guess the probability, as we've all seen companies design themselves into bankruptcy or out of a particular market segment, and Apple is no exception.

Last edited by Tesselator; Jun 6, 2013 at 10:50 AM.
Old Jun 6, 2013, 11:34 AM   #18
09sroyal
macrumors member
 
Join Date: Oct 2012
Location: The Shire
You would also need power. Thunderbolt wouldn't be capable of powering each unit, nor would any other connection that I can think of. Short of having a separate power cable/supply in each one, which is not something Apple would consider, you would need one data connection and one power connection. Apple likes to create simple products, so they wouldn't have you plugging in different cables for each unit. They would want them all to just fit together, if this is the design they plan on using. It is also possible that they create their own connection that does it all...
__________________
2013 15" Retina MacBook Pro|iPhone 4S 16GB Black|iPad 3rd gen 32GB Black|Apple TV 3rd gen|@09sroyal
Old Jun 6, 2013, 01:36 PM   #19
deconstruct60
macrumors 603
 
Join Date: Mar 2009
Quote:
Originally Posted by 09sroyal View Post
It is also possible that they create their own connection that does it all...
For use on a single product? Not very likely. Even if you magically merged the Mac mini and Mac Pro, that's just a two-product subset of the Mac lineup. Why bother? The mini does OK by itself without it.

Much of this seems to be truly motivated by "don't like this about the mini" or "don't like that about the iMac", so therefore mutate the Mac Pro to solve the problem, in contrast to sending feedback to the mini/iMac product lines about addressing the "problem" in future models.

They could make a quirky adapter that has a TB connector section and a power backplane, weaved into a Lego-block-like adapter that two units snap into to stack. That is a lot of gyrations to go through with very little cost effectiveness.

If Apple were interested in selling a wide variety of custom Lego blocks, perhaps, but they aren't about Byzantine BTO config options and complicated product inventory.
Old Jun 6, 2013, 02:20 PM   #20
Tesselator
macrumors 601
 
 
Join Date: Jan 2008
Location: Japan
Quote:
Originally Posted by 09sroyal View Post
You would also need power? Thunderbolt wouldn't be capable of powering each unit, nor will any other that I can think of. Other than having a separate power cable/supply in each one, which is not something that apple would consider,
Hehe, like you or anyone here knows what Apple would or wouldn't consider?

Quote:
you would need one data connection and one power connection.
Ya, just like SATA.
But beefier...


Quote:
Apple like to create simple products, so they wouldn't have you plugging in different cables for each unit.
All of Apple's machines are more complex than most other vendors'. And who said anything about cables? What's that power socket on the back of your Mac Pro now? Do you think there NEEDS to be a cable plugged into that if daisy-chaining?


Quote:
They would want them all to just fit together if this is the design they plan on using. It is also possible that they create their own connection that does it all...
Yep, you can have many different kinds of connectors molded into a single "plug-n-play" socket, including something like a standard 3-pronged mains connector and a PCIe card edge, and... whatever...

Here's how Peter imagined it for example:


Last edited by Tesselator; Jun 6, 2013 at 03:01 PM.
Old Jun 6, 2013, 07:42 PM   #21
tamvly
macrumors 6502a
 
 
Join Date: Nov 2007
I'd buy one of the above ...
Old Jun 6, 2013, 08:13 PM   #22
goMac
macrumors 603
 
Join Date: Apr 2004
Quote:
Originally Posted by Tesselator View Post
Here's how Peter imagined it for example:
There are a lot of technical reasons why this is an insanely bad idea for a power machine. Every connector could introduce data degradation and electrical strain points. The more sections you add, the more unstable the machine becomes.

For weaker, lower-power devices, you might be able to pull it off. But a machine chucking that much bandwidth and power around? Not going to happen.
Old Jun 6, 2013, 10:40 PM   #23
Tesselator
macrumors 601
 
 
Join Date: Jan 2008
Location: Japan
Quote:
Originally Posted by goMac View Post
There are a lot of technical reasons on why this is an insanely bad idea for a power machine. Every connector could introduce data degradation and electrical strain points. The more sections you add, the more unstable the machine would become.

For weaker, lower power devices, you might be able to pull it off. But a machine chucking all that bandwidth and power around? Not going to happen.
Naw, I don't think so. It's not like the boxes would be hanging off in space or something. The hard drives, the RAM riser boards, and the CPU daughter cards all use very similar systems. You're basically saying that any one or all of those are unstable and "an insanely bad idea for a power machine", yet we all use them every day.

Last edited by Tesselator; Jun 6, 2013 at 11:13 PM.
Old Jun 7, 2013, 12:33 AM   #24
ScottishCaptain
macrumors 6502a
 
Join Date: Oct 2008
Quote:
Originally Posted by Tesselator View Post
Naw, I don't think so. It's not like the boxes would be hanging off in space or something. The hard drives, the RAM riser boards, and the CPU daughter cards all use very similar systems. You basically saying any one or all of those are unstable and " an insanely bad idea for a power machine" yet we all use them every day.
Have you seen the CPU connector on the daughter card? There are over a hundred contact points on that sucker, plus several large ones for high-current DC power. A single PCIe x16 slot has over 82 conductors.

That "prototype" I keep seeing is so horribly thought out I don't even know where to begin with it. I can't stand industrial designers who throw out garbage like that and ignore all the physical implications of an idea just because they're not convenient for the design. Seriously, two connectors for both a high-speed data bus and a high-current power supply? He doesn't even address the latching system that would be required to solidify removable modules into a stable monolithic configuration. All it shows is a bunch of tiny latches that give you the impression things are supposed to hook together, without actually detailing how such a system would operate; the mechanics behind that kind of thing are not trivial to get right.

Apple excels at hardware design precisely because they know what their ideas entail as a whole while they're designing them. The Mac Pro's case latch doubles up and secures the ODD and disk drives as well as holding on the side panel. A design decision like that requires foresight and planning. I see none of that in the prototype posted above.

The more connectors you add to a system, the more unstable it becomes. You've gone from the solid configuration of a Mac Pro tower to a whole bunch of stacked modules. Let's assume it was designed properly (realistically) and you had a 200-pin edge connector on each PCB that mated with the modules above and below, plus another 20 pins for power. That's 220 individual connections that need to be perfect.

Stack four modules and you've got 3*220 = 660 individual points that need to be electrically operable, otherwise your system will crash or fail to boot. Why the hell would I want that? People seem to forget that systems like the SNES and N64 used edge connectors for their game cartridges and occasionally needed them reseated because the system wouldn't start up properly. Do you really want to dismantle your modular tower on a monthly or weekly basis because something shifted a bit (vibrations from a DC fan or hard disk drive), causing one of those 660 connections to become intermittent?
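That connection-count arithmetic compounds quickly. A sketch, using the 220-contacts-per-joint figure from the post above and a made-up illustrative per-contact reliability:

```python
# n stacked modules means n-1 mated joints. Assume 220 contacts per
# joint and a probability p that any single contact mates cleanly.
def p_all_contacts_good(modules, contacts_per_joint=220, p_contact=0.9999):
    joints = modules - 1
    total = joints * contacts_per_joint
    return total, p_contact ** total

total, p = p_all_contacts_good(4)
print(total)       # 660 contact points across the stack
print(f"{p:.3f}")  # chance that every single one is good
```

Even at 99.99% per contact, a four-module stack has roughly a 6% chance that at least one of its 660 connections is bad, and every module you add multiplies the exposure.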

-SC
__________________
2010 Mac Pro (MacPro5,1), 2*2.93ghz, 64GB, 4x2TB, Apple RAID Card, 5970 GPU, 2xSD, Eizo CG276W
Old Jun 7, 2013, 12:34 AM   #25
goMac
macrumors 603
 
Join Date: Apr 2004
Quote:
Originally Posted by Tesselator View Post
Naw, I don't think so. It's not like the boxes would be hanging off in space or something. The hard drives, the RAM riser boards, and the CPU daughter cards all use very similar systems. You basically saying any one or all of those are unstable and " an insanely bad idea for a power machine" yet we all use them every day.
Actually, that's a good example. The RAM riser boards and CPU daughter cards are all risky components. They work because there is basically only one connector each. A stackable machine would have three or four connectors. It's not really doable.

This has been mentioned in other threads: some machines are getting rid of CPU daughter cards and sockets because of the performance issues they cause at high speeds.

One socket is usually OK. The more you add, the riskier it gets. A machine that uses three or four connectors between components? Probably not doable.
