
MacDK86

macrumors newbie
Nov 25, 2010
12
0
Seems like the new Macbook Pro is just around the corner :)
 

Attachments

  • Macbook Pro 2012_PS.jpg

KnightWRX

macrumors Pentium
Jan 28, 2009
15,046
4
Quebec, Canada
Imagine a common and practical set-up. A PC, an external HD, two external monitors. From what I understand the two monitors and the PC are endpoint devices in a Thunderbolt setup, thus requiring you to compromise and put one of the devices on another slower port.

Rocketman

DisplayPort 1.2 supports daisy chaining monitors over the same DisplayPort connection. So does Thunderbolt if using Thunderbolt enabled monitors.

So what you understand does not take into account existing technology.
 

JWCOMBS

macrumors newbie
Apr 12, 2012
1
0
Imagine a common and practical set-up. A PC, an external HD, two external monitors. From what I understand the two monitors and the PC are endpoint devices in a Thunderbolt setup, thus requiring you to compromise and put one of the devices on another slower port.

Rocketman

The current Thunderbolt display is daisy-chainable. Many of the hard drives/RAID arrays are as well. I'm running two TB displays and a RAID array, so I have 3 open TB ports.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,298
3,893
The arrival of the new Thunderbolt chips lines up nicely with Intel's Ivy Bridge processors to set the stage for updates to a number of Apple's Mac lines.

That's a nice coincidence, but given that these TB controllers work just as well with Sandy Bridge and with Intel-CPU-less devices, it is a bit of a stretch. New TB chips are due because the older TB chips are over a year old and due for replacement. It is 'nice' that this aligns a bit with the new Ivy Bridge designs that will be coming online, but it wasn't particularly necessary. Nor particularly desirable for those who were looking to get Ivy Bridge designs out the door back when they were earmarked to launch in Q1 '12.

The Sandy Bridge and Ivy Bridge Core i CPU models that are capped at 20 PCI-e lanes both benefit from the x2-limited controllers, since they are highly PCI-e-lane constrained. That constraint is just as tight for Sandy Bridge as it is for Ivy Bridge. [ Apple less so, because they tend not to grossly oversubscribe PCI-e lane bandwidth with every option imaginable. ]


Port Ridge is targeted at "chain ending" devices which, like most TB peripherals, almost certainly do not have any Intel CPUs in them at all. That is not particularly relevant to the Ivy Bridge roll-out.
 
Last edited:

deconstruct60

macrumors G5
Mar 10, 2009
12,298
3,893
and the PC are endpoint devices in a Thunderbolt setup,

I haven't seen Intel's official definition of an "endpoint" device, but I suspect that PCs are not endpoints. The PCI-e controller in the PC is more the "root" of the PCI-e tree that is routed through the TB chain than an endpoint. I suspect "endpoint" is more technically classified as a "leaf" node than as simply anything at a chain boundary.

TB sets up a network of PCI-e switches. The switch in the personal computer (PC) is pragmatically more dominant than the downstream switches. It may not be as rigid as USB, but it is likely USB-like in that respect once you bring the PC boot context into the pragmatic implementation issues.

Port Ridge is aimed more at inexpensive dongles that terminate a TB chain than at serving as the hub for one. For instance, if you put a TB Display downstream of a Port Ridge device, there would be no DisplayPort traffic on the wire. Chain-ending non-display dongles don't present a problem if they don't propagate video, because they are at the end. It has a similar impact to putting a legacy DisplayPort device at the end of the TB chain: it's the end of the chain, so it's not a problem that TB properties don't continue, because there is nowhere else to go.
 
Last edited:

Sackvillenb

macrumors 6502a
Mar 1, 2011
573
2
Canada! \m/
PC manufacturers using Thunderbolt

Well, since this article suggests that "PC" manufacturers may start using Thunderbolt... that would be REALLY good news, if true. Thunderbolt needs mainstream support so badly it's not even funny. It's potentially useful hardware, but with rare and expensive peripherals, how useful is that? Yes, I know a lot of people complain about that, but that's because it's a legitimate problem!

Hopefully we'll start seeing Thunderbolt on PCs, especially since Intel should be backing this technology... and then maybe we'll see more (and more reasonably priced) Thunderbolt peripherals! :)
 

repoman27

macrumors 6502
May 13, 2011
485
167
The VR-Zone article which this post quotes contains several claims which are wide of the mark.

For one, the only Mac to ship with a Thunderbolt controller that had fewer than 2 DisplayPort sink connections is the 2011 MacBook Air. All the rest used Light Ridge, even if they only offered a single Thunderbolt port, in order to allow for multiple-display support. The 2011 Macs feed both DP inputs on the Thunderbolt controller from the discrete GPU if they have one, or from the integrated one if they don't.

The four-channel Cactus Ridge controller still comes in two flavors, and Apple will most likely continue to use the low power version of this chip for the MacBook Pros. What isn't so clear is what the MBA will use. The Cactus Ridge 2C variant is larger and draws more power than Eagle Ridge, but offers two fewer PCIe lanes. There doesn't seem to be a lot of win with that chip.

The VR-Zone article also continues to propagate the Thunderbolt for iOS devices nonsense. When Apple comes up with an SoC that has either PCIe or DisplayPort connections, and Intel comes up with a Thunderbolt controller that draws less power than an iPad running at full tilt, this might make sense.

I wonder what the source of the chart is? It offers those not under NDA more concrete data about Thunderbolt than we've seen to date.

I also wonder if the PCIe lanes on the new controllers support 3.0 speeds. The step backwards to 2 lanes for DSL3310 and DSL2210 would prevent them from fully utilizing a 10 Gbps Thunderbolt channel unless they are 3.0. And if they are 3.0, the DSL3510 could be a monster capable of pushing 20 Gbps of PCIe packets, although I highly doubt this is the case. I wish we could see that first footnote.
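The lane arithmetic behind that question can be sketched as follows (a rough back-of-the-envelope check using the nominal PCIe line rates and encoding overheads, not figures from the chart):

```python
# Rough per-lane PCIe payload rates vs. a 10 Gbps Thunderbolt channel.
# PCIe 2.0: 5 GT/s per lane with 8b/10b encoding -> 4 Gbps payload per lane.
# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding -> ~7.88 Gbps per lane.

def pcie_payload_gbps(lanes, gen):
    """Approximate usable payload bandwidth (Gbps) before protocol overhead."""
    if gen == 2:
        return lanes * 5.0 * (8 / 10)      # 8b/10b encoding overhead
    if gen == 3:
        return lanes * 8.0 * (128 / 130)   # 128b/130b encoding overhead
    raise ValueError("unsupported generation")

TB_CHANNEL_GBPS = 10.0

x2_gen2 = pcie_payload_gbps(2, 2)   # 8.0 Gbps: cannot fill the channel
x2_gen3 = pcie_payload_gbps(2, 3)   # ~15.75 Gbps: more than enough

print(f"x2 gen2: {x2_gen2:.2f} Gbps, x2 gen3: {x2_gen3:.2f} Gbps")
print(f"x2 gen2 saturates the TB channel: {x2_gen2 >= TB_CHANNEL_GBPS}")
print(f"x2 gen3 saturates the TB channel: {x2_gen3 >= TB_CHANNEL_GBPS}")
```

So an x2 back end only keeps up with a 10 Gbps channel if the lanes run at 3.0 rates, which is exactly the open question here.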
 
Last edited:

ruftytufty

macrumors member
Jan 4, 2005
96
1
Berkeley, CA
Has anyone seen any announcements regarding components being released with the purpose of reducing the cost of Thunderbolt cables (the source of one of the many complaints about Thunderbolt)? Just copper-wire implementations for now; it looks like optical in any form is still a ways out.

Each TB cable from Apple includes, in iFixit's words, "12 larger, inscribed chips, and tons of smaller electronic components". That's obviously going to make for a relatively expensive, as well as bulky, cable.

We've seen many technologies start out implemented using a bunch of components, and thus fairly expensive, and later become much cheaper and smaller as special integrated components are developed that encapsulate the functionality in a single component (or a few) - radios, wired networking, wireless, USB, etc. I expect that this will happen with TB as well, just a matter of when.
 

jburns

macrumors regular
May 1, 2007
166
11
NC-USA
the thread functionality is to add to the discussion, not to tell us when you last jerked-off to the thought of a new mac.

Now we know why you live on an island.

----------

I bet Mac Pro is going to be the first updated Mac they release this year.
░░░█░░░░░░▄██▀▄▄░░░░░▄▄▄░░​░█
░▄▀▒▄▄▄▒░█▀▀▀▀▄▄█░░░██▄▄█░​░░█
█░▒█▒▄░▀▄▄▄▀░░░░░░░░█░░░▒▒​▒▒▒█
█░▒█░█▀▄▄░░░░░█▀░░░░▀▄░░▄▀​▀▀▄▒█
░█░▀▄░█▄░█▀▄▄░▀░▀▀░▄▄▀░░░░​█░░█
░░█░░░▀▄▀█▄▄░█▀▀▀▄▄▄▄▀▀█▀█​█░█
░░█░░░░██░░▀█▄▄▄█▄▄█▄████░​█
░░░█░░░░▀▀▄░█░░░█░█▀██████​░█
░░░░▀▄░░░░░▀▀▄▄▄█▄█▄█▄█▄▀░​░█
░░░░░░▀▄▄░▒▒▒▒░░░░░░░░░░▒░​░░█
░░░░░░░░░▀▀▀▄▄▄▄▄▄▄▄▄▄▄▄▄▄​▄▀

Wow! The new screen looks great.
 

iBug2

macrumors 601
Jun 12, 2005
4,531
851
Well, correct me if I'm wrong, but isn't TB supposed to be capable of closer to 10GB speeds instead of a mere 10MB?

Oops, that was meant to be 200Gbit, not Mbit, of course. PCI Express 3.0 x16 is 128Gbit/sec, so 100Gbit would barely cut it in a few years; 200Gbit will be needed to drive the modern GPUs of that time.
 

lifeguard90

macrumors 6502a
Aug 25, 2010
620
0
Chicago
This will likely not happen, and I disagree: Lion and Snow Leopard did not come out at the same time as a hardware release. First off, this has never happened before, at least in recent years. Also, this would mean too many bangs in one day, and no spotlight over the summer for other products. Let alone people buying a Mac and then having to upgrade again: more $$. Apple will not let the iMac go over 12 months; I would expect an iMac and a 15" anytime from the 25th through WWDC. I wonder when Apple will announce WWDC; personally, I peg it right around their financial call. Lastly, IMO, I think Apple has the new chips, has had them, or had enough pull to get Intel to move up their Ivy release so as to crank items out faster.





Maybe Intel is ready to ship slightly ahead of schedule. Hard to tell whether this means slightly earlier Macs though. They may be looking to align these new desktops and laptops with the release of Mountain Lion, which is announced for "summer". Especially if the timeframe between their earliest ship dates for these computers and the new OS version ends up being just around one month or so. Makes more of a bang if they can announce a new OS X along with new Macs.
 

thekev

macrumors 604
Aug 5, 2010
7,005
3,343
I demand some cheaper cords, external GPUs, more affordable external hard drives, and developer kits delivered to third-parties so the industry as a whole can benefit from these super-awesome thunderbolt speeds!

I wouldn't expect it to be truly cheap anytime soon, and external GPUs are likely to be a flop.

Hopefully we'll find out soon what the fate of the Mac Pro is. I think there were rumours that Apple was waiting for updated Thunderbolt before refreshing the Mac Pro workstation perhaps for the last time.

It's always rumored to be the last time. It could be, yet the speculation doesn't actually change that. Remember the rumor that the macbook pros would be made redundant?

Hopefully we'll start seeing Thunderbolt on PCs, especially since Intel should be backing this technology... and then maybe we'll see more (and more reasonably priced) Thunderbolt peripherals! :)

Intel planned this part from the beginning. Light Peak just debuted as Thunderbolt, using Apple's connector. I want to see if the Mini DisplayPort implementation actually holds out and becomes the norm.

This will likely not happen, and I disagree: Lion and Snow Leopard did not come out at the same time as a hardware release. First off, this has never happened before, at least in recent years. Also, this would mean too many bangs in one day, and no spotlight over the summer for other products.

I could see something like that with the Mac Pros, as a lower-volume line, to cut the costs of testing such a machine for an OS that is about to be replaced. It would make for a slower start, given that the people who buy them probably do not update their OS the first day a new one comes out, due to the need for bug fixes and assurance that all of their applications will run as expected.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,298
3,893
The four-channel Cactus Ridge controller still comes in two flavors, and Apple will most likely continue to use the low power version of this chip for the MacBook Pros. What isn't so clear is what the MBA will use.

There is nothing here that says Intel is completely removing the previous chips from the market. The MBA could possibly get by with what it is using now for another year. Otherwise, if the next design revision opens up a little bit more motherboard room, it can just use the cheaper Cactus Ridge. Eagle Ridge (used now in the MBA) is 8mm x 9mm; 12mm x 12mm is only 4mm and 3mm bigger. Millimeters, folks. (Versus 7mm and 6mm increases for Light Ridge.) That also meshes quite nicely with Apple's "use common parts across as many models as possible" approach to design.

The Cactus Ridge 2C variant is larger and draws more power than Eagle Ridge, but offers two fewer PCIe lanes. There doesn't seem to be a lot of win with that chip.

If it is cheaper, that is probably a big win for most system vendors. Cactus Ridge will likely sell in higher volume than Eagle Ridge (which is basically a 4C model with some internal features either turned off or binned out because they didn't work; that means the same core R&D costs are amortized over all variants, and it comes off the same production line). The difference in power (0.25W) isn't going to cause many to lose sleep.

The VR-Zone article also continues to propagate the Thunderbolt for iOS devices nonsense. When Apple comes up with an SoC that has either PCIe or DisplayPort connections,

That isn't even the issue. USB is more ubiquitous than FireWire and TB put together. The vast majority of iPhone, iPod, and iPad devices are not hooked to Apple's more classically shaped personal computers. They are hooked to the mainstream PC market, which is still much larger than all those iDevices put together.

If anything, it is USB 3.0 that iOS devices could move to, but the huge inertia of the current devices' proprietary USB-oriented port impedes even that.





I also wonder if the PCIe lanes on the new controllers support 3.0 speeds.
The step backwards to 2 lanes for DSL3310 and DSL2210 would prevent them from fully utilizing a 10 Gbps Thunderbolt channel unless they are 3.0.

No, because TB would have to get a speed increase to transport PCI-e v3.0 with little to no added latency. Limiting to 2 PCI-e lanes while keeping the 10 Gbps TB channel just means that latency issues (distance, plus traversing multiple TB controllers on a longer chain) are less of a problem. If you try to put 10Gbps of PCI-e v2.0 traffic onto TB's 10Gbps throughput, you are going to have problems. TB's protocol overhead is lightweight, but it isn't zero. Neither are the additional distance-propagation issues. 8Gbps (2x v2.0's 500MB/s) leaves better headroom.
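The headroom argument can be made concrete with a quick calculation (a sketch based on the nominal rates mentioned above):

```python
# Oversubscription check: PCIe payload offered vs. the 10 Gbps TB channel.
TB_CHANNEL_GBPS = 10.0
PCIE2_LANE_GBPS = 5.0 * 8 / 10   # 5 GT/s with 8b/10b encoding -> 4 Gbps/lane

x4_payload = 4 * PCIE2_LANE_GBPS   # 16 Gbps: oversubscribes the channel
x2_payload = 2 * PCIE2_LANE_GBPS   # 8 Gbps: fits, with slack for overhead

oversub_ratio = x4_payload / TB_CHANNEL_GBPS   # how badly x4 overshoots
headroom_gbps = TB_CHANNEL_GBPS - x2_payload   # slack left by x2

print(f"x4 oversubscription: {oversub_ratio:.1f}x")
print(f"x2 headroom: {headroom_gbps:.1f} Gbps")
```

An x2 back end leaves 2 Gbps of the channel free for Thunderbolt's own framing and for absorbing latency across a chain, which is the headroom being described.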

TB is implicitly set up for a two-"port" context where half of the 4x traffic coming in/out of the PC goes down each of the two ports. Skew all of the 4x traffic between just two specific TB controllers and you will start to get hiccups.

TB is oriented at aggregating older and slower protocols from external boxes to a PC. In general, it needs to be substantially faster than the protocols being transported so that the trips to/from the PC it is extending are relatively transparent to the PCI-e bus.

Given commentary by some Intel folks to some of the tech journalists, there is no intention to move TB's top-end speed until 2014 (http://www.anandtech.com/show/5405/the-first-thunderbolt-speed-bump-likely-in-2014). So PCI-e v3 coverage would need to wait till then. It is a very similar issue with expanding to cover DisplayPort v1.2.

Besides, you can't really hook up to lanes that pragmatically aren't there. The large bulk of Intel's Core i lineup that most Macs use only has 20 lanes. If 16 are soaked up by the GPU and another 1-2 by some other PCI-e v3 device, then you only have 2-3 v3.0 lanes left anyway.


And if they are 3.0, the DSL3510 could be a monster capable of pushing 20 Gbps of PCIe packets, although I highly doubt this is the case. I wish we could see that first footnote.

That would be quite a feat for a pipe that is rated at a top end of 10Gbps: getting 20 Gbps through.

It would also be kind of interesting to see how Intel plans to crank the speed much higher and still get FCC Class B ratings on this stuff in a normal system configuration with a couple of cables hooked up. Speed bumps over 10Gbps seem likely to leverage fiber. I think they just don't want to tell mainstream folks that just yet.
 

repoman27

macrumors 6502
May 13, 2011
485
167
There is nothing here that says Intel is completely removing the previous chips from the market. The MBA could possibly get by with what it is using now for another year. Otherwise, if the next design revision opens up a little bit more motherboard room, it can just use the cheaper Cactus Ridge. Eagle Ridge (used now in the MBA) is 8mm x 9mm; 12mm x 12mm is only 4mm and 3mm bigger. Millimeters, folks. (Versus 7mm and 6mm increases for Light Ridge.) That also meshes quite nicely with Apple's "use common parts across as many models as possible" approach to design.

If it is cheaper, that is probably a big win for most system vendors. Cactus Ridge will likely sell in higher volume than Eagle Ridge (which is basically a 4C model with some internal features either turned off or binned out because they didn't work; that means the same core R&D costs are amortized over all variants, and it comes off the same production line). The difference in power (0.25W) isn't going to cause many to lose sleep.

I reckoned this as well: they could just stick with Eagle Ridge, especially because there's little to indicate that the 2C Cactus Ridge controller would end up being any cheaper. Don't forget that 8x9 is 72 mm^2, which is half the size of 12x12 (144 mm^2), and a 13% power increase for a 20% PCIe performance reduction isn't exactly going in the right direction. This table only states package size, not die size, so it's hard to say if these are harvested chips or not. The identical package sizes point more towards drop-in compatibility for PC motherboard manufacturers. It's not like a single Mac model would ever be specced with two different Thunderbolt controllers.

No, because TB would have to get a speed increase to transport PCI-e v3.0 with little to no added latency. Limiting to 2 PCI-e lanes while keeping the 10 Gbps TB channel just means that latency issues (distance, plus traversing multiple TB controllers on a longer chain) are less of a problem. If you try to put 10Gbps of PCI-e v2.0 traffic onto TB's 10Gbps throughput, you are going to have problems. TB's protocol overhead is lightweight, but it isn't zero. Neither are the additional distance-propagation issues. 8Gbps (2x v2.0's 500MB/s) leaves better headroom.

Thunderbolt does not require a speed increase to use PCIe 3.0 connections. Current controllers appear to have an 8 lane, 5 port PCIe 2.0 switch internally (4 lanes on one side, configurable as 4 x1, 2 x2 or 1 x4, and 4 lanes on the other as 1 x4) to feed a single PCIe to Thunderbolt protocol adapter. A PCIe 2.0 x4 connection can manage 16 Gbps of PCIe throughput, but the protocol adapter is limited on the other side by a single 10 Gbps Thunderbolt channel. Thunderbolt really does provide a full 10 Gbps to the upper layers (no additional headroom required for 8b/10b encoding), but the PCIe overhead is still present (packet framing and DLLPs). Anandtech crammed 8021 Mbps of payload data through a single Thunderbolt cable on their first test of the Promise Pegasus R6, proving that the protocol adapter can exceed the limits of a PCIe x2 connection. (Why else would the first gen controllers have PCIe x4 connections?)
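As a quick sanity check on that claim (a sketch using the nominal 500 MB/s per-lane payload rate of PCIe 2.0; the 8021 Mbps figure is Anandtech's published result):

```python
# An x2 PCIe 2.0 link tops out at 2 * 500 MB/s = 8000 Mbps of payload,
# so measuring 8021 Mbps through one Thunderbolt cable implies the
# PCIe-to-Thunderbolt protocol adapter is fed by more than two lanes.
MBPS_PER_PCIE2_LANE = 500 * 8             # 500 MB/s payload -> 4000 Mbps/lane
x2_limit_mbps = 2 * MBPS_PER_PCIE2_LANE   # 8000 Mbps ceiling for an x2 link
measured_mbps = 8021                      # Anandtech's Pegasus R6 result

print(f"x2 limit: {x2_limit_mbps} Mbps, measured: {measured_mbps} Mbps")
print(f"exceeds x2 limit: {measured_mbps > x2_limit_mbps}")
```

The margin is tiny, but the measurement does land on the far side of the x2 ceiling, which supports the point about the x4 connection on first-generation controllers.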

Besides, you can't really hook up to lanes that pragmatically aren't there. The large bulk of Intel's Core i lineup that most Macs use only has 20 lanes. If 16 are soaked up by the GPU and another 1-2 by some other PCI-e v3 device, then you only have 2-3 v3.0 lanes left anyway.

Mainstream Sandy Bridge processors provide 16 PCIe 2.0 lanes off the CPU whereas Ivy Bridge processors provide 16 lanes of PCIe 3.0. Both Cougar Point and Panther Point (6 and 7-series) chipsets provide 8 PCIe 2.0 lanes for connecting additional devices, however the 20 Gbps DMI 2.0 interconnect with the CPU can be a potential bottleneck. Apple does currently split the lanes coming off of the CPU, which are normally reserved for a discrete GPU, to connect the Thunderbolt controller on some Mac models. A PCIe 3.0 x2 connection could fully feed a single 10 Gbps Thunderbolt to PCIe protocol adapter and use 2 less PCIe lanes in the process.

That would be quite a feat for a pipe that is rated at a top end of 10Gbps: getting 20 Gbps through.

A single Thunderbolt cable and a normal 2-channel port are rated at 2 x 10 Gbps full-duplex, or 20 Gbps full-duplex. This is similar to 40GbE which is generally comprised of four 10 Gbps links aggregated. If the Cactus Ridge 4C controllers contained two PCIe to Thunderbolt protocol adapters, they could pump 20 Gbps of PCIe over a single cable. This is highly unlikely though, because it would create an odd number of ports on the internal Thunderbolt switch (5x4 versus 4x4, 2x2 or 1x1 as are present in the other configurations).
 
Last edited:

lifeguard90

macrumors 6502a
Aug 25, 2010
620
0
Chicago
Sure hope iMacs come in May or early-to-mid June at the latest. So excited. Anti-reflection glass had better mean glass and not some matte gimmick.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,298
3,893
I reckoned this as well, that they could just stick with Eagle Ridge, especially because there's little to indicate that the 2C Cactus Ridge controller would end up being any cheaper.

Yeah, the DSL2310 (Eagle Ridge) is a lower number than DSL3310. That typically means less expensive (although sometimes Intel will boost the price because "lower power" somehow demands a price premium). Also, I earlier missed that it has model labeling consistent with the new ones.


Thunderbolt does not require a speed increase to use PCIe 3.0 connections. Current controllers appear to have an 8 lane, 5 port PCIe 2.0 switch internally

But when v3.0 lanes interact with a v2.0 device, the bandwidth goes down to v2.0 speeds. It is backwards compatible, but in the compatible mode the latency of data transfers goes up.

If you have a v3.0 card in a v3.0-supplied slot inside the PC and a duplicate v3.0 card in an external device connected by the current implementation, you would see different throughputs and latencies. It would work, but it would work slower.

Similarly, if you increased the current controllers so that there was v3.0 at the controller's switch edges but still capped the channel at 10Gbps, then a 4x controller would be grossly oversubscribed on bandwidth. 4x v2.0 is already more than the channel can carry; 4x v3.0 would be worse.


Anandtech crammed 8021 Mbps of payload data through a single Thunderbolt cable on their first test of the Promise Pegasus R6, proving that the protocol adapter can exceed the limits of a PCIe x2 connection. (Why else would the first gen controllers have PCIe x4 connections?)

And wasn't this the same demonstration where the audio output on the daisy-chained display started to hiccup? It isn't that the "full 10Gbps" is given to transporting PCI-e. It is really about practical usage in contexts where more than just one device is connected to the port and there are several protocols with varying latencies all transcoded onto PCI-e and then on top of TB. If you put 2 or more devices on a port, and 2 or more of them need some isochronous transfers while some link is trying to push 10Gbps down the wire to a single destination, it isn't going to work well.

For TB to work well with a variety of v3.0 devices, it will likely pragmatically need a speed increase. Sure, we can play a numbers game where you fill the channel up to the brim and claim victory, but I doubt that really works well in a wide variety of network contexts with a wide variety of devices. Latency is about as big an issue as bandwidth.


Mainstream Sandy Bridge processors provide 16 PCIe 2.0 lanes off the CPU whereas Ivy Bridge processors provide 16 lanes of PCIe 3.0.

Yes. Sorry, I was thinking of the Xeon E3 variants where the other 4 present are actually turned on.


Apple does currently split the lanes coming off of the CPU, which are normally reserved for a discrete GPU, to connect the Thunderbolt controller on some Mac models.

All the more reason for the reduction to 2 lanes from 4. If the lanes are already oversubscribed, then taking 4 just oversubscribes them even more.


A single Thunderbolt cable and a normal 2-channel port are rated at 2 x 10 Gbps full-duplex, or 20 Gbps full-duplex. This is similar to 40GbE which is generally comprised of four 10 Gbps links aggregated.

That would be nice if that second channel were unused. The pragmatic problem is that it is used for DisplayPort traffic. Another reason the TB protocol overhead is so low is that they just physically separate the TB-encoded traffic for PCI-e from the TB-encoded traffic for DisplayPort.

Again, this boils down to "what if you actually use TB to hook up the variety of stuff they say you can hook up with TB". Sure, if you blew away the video traffic, it would be easier to run PCI-e v3.0 data (at speed and latency) sooner rather than later. It is likely going to be later, though, because TB will get better traction if it does the wider variety of stuff it is marketed to do than if it just wins some "need for speed" contest on a narrower set of devices.
 

repoman27

macrumors 6502
May 13, 2011
485
167
Also, I earlier missed that it has model labeling consistent with the new ones.

I missed that as well.

Similarly, if you increased the current controllers so that there was v3.0 at the controller's switch edges but still capped the channel at 10Gbps, then a 4x controller would be grossly oversubscribed on bandwidth. 4x v2.0 is already more than the channel can carry; 4x v3.0 would be worse.

Or over-provisioned, in the case of a host controller in a PC; but yes, PCIe 3.0 x4 is definite overkill if there is still only one 10 Gbps PCIe to Thunderbolt protocol adapter.

What I was thinking was that if the Thunderbolt controller's internal PCIe switch was upgraded to handle PCIe 3.0, it would allow for greater flexibility. On the host side, you could feed a protocol adapter using either PCIe 2.0 x4 or PCIe 3.0 x2. On the device side, unless you use an additional PCIe switch, you can only add as many PCIe based controllers to your design as you have available ports on the Thunderbolt controller's built in PCIe switch. This currently allows for 4 x1, 2 x1 + 1 x2, 2 x2 and 1 x4 configurations. Not that any exist right now, but say a SATA 6Gb/s host controller with a PCIe 3.0 x1 back end could be connected, along with several other lower bandwidth PCIe based controllers, and not necessarily over-subscribe the 10 Gbps protocol adapter.
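To illustrate the point, here is a sketch of such a hypothetical peripheral design; the device mix and its nominal throughput figures are invented for illustration, not taken from any real product:

```python
# A hedged oversubscription check for a hypothetical Thunderbolt peripheral:
# one SATA 6Gb/s controller on a PCIe 3.0 x1 port plus several low-bandwidth
# x1 controllers on the TB controller's internal PCIe switch.
# Throughputs below are nominal link rates (Gbps) for each protocol,
# which is what can actually hit the 10 Gbps protocol adapter.
TB_ADAPTER_GBPS = 10.0

devices = [
    ("SATA 6Gb/s HBA (PCIe 3.0 x1)", 6.0),
    ("Gigabit Ethernet NIC (x1)",    1.0),
    ("USB 2.0 host (x1)",            0.48),
    ("FireWire 800 host (x1)",       0.8),
]

demand = sum(gbps for _, gbps in devices)   # worst-case aggregate demand
fits = demand <= TB_ADAPTER_GBPS

print(f"worst-case demand: {demand:.2f} Gbps")
print(f"fits within the 10 Gbps adapter: {fits}")
```

Even with every controller saturated at once, this particular mix stays under the adapter's 10 Gbps, which is the flexibility argument being made for a 3.0-capable internal switch.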

And wasn't this the same demonstration where the audio output on the daisy-chained display started to hiccup? It isn't that the "full 10Gbps" is given to transporting PCI-e. It is really about practical usage in contexts where more than just one device is connected to the port and there are several protocols with varying latencies all transcoded onto PCI-e and then on top of TB. If you put 2 or more devices on a port, and 2 or more of them need some isochronous transfers while some link is trying to push 10Gbps down the wire to a single destination, it isn't going to work well.

That was the second test (the ATD one) where the audio issues showed up. And to be fair, the ATD didn't come out until well after the Pegasus, so there was no way for Promise to really test for that. Also, the Pegasus did scale back its throughput as other devices on the ATD requested bandwidth, and the only bit that had issues was a USB audio device. So essentially it boils down to a situation where the USB 2.0 controller in the ATD didn't handle an isochronous data stream very well—not much of a revelation, really.

Also, as you rightly point out, this was sort of an attempt to "fill the channel up to the brim and claim victory," more than a normal workflow situation. Promise didn't ship the Pegasus full of SF-2281 based SSDs; Anand stuck those in there specifically to push the limits.

All the more reason for the reduction to 2 lanes from 4. If the lanes are already oversubscribed, then taking 4 just oversubscribes them even more.

That would be nice if that second channel were unused. The pragmatic problem is that it is used for DisplayPort traffic. Another reason the TB protocol overhead is so low is that they just physically separate the TB-encoded traffic for PCI-e from the TB-encoded traffic for DisplayPort.

Intel has specifically stated that both PCIe and DP data can be transported on each channel in each direction; they are not physically separated. The proof of this is that you can daisy-chain two ATDs off of a single Thunderbolt port and have them both function. This requires 11.6 Gbps of DP data to be sent down a single cable in the same direction, and thus it must be present on both channels, along with whatever PCIe data is required for the other devices in the displays to work.
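That 11.6 Gbps figure is consistent with a back-of-the-envelope estimate, assuming the commonly used ~241.5 MHz reduced-blanking pixel clock for 2560x1440 at 60 Hz and 24-bit color (both are assumptions on my part, not official Intel or Apple numbers):

```python
# Uncompressed DisplayPort data rate for two daisy-chained 2560x1440@60
# displays, estimated from the pixel clock (which includes blanking).
PIXEL_CLOCK_HZ = 241.5e6   # ~CVT-RB pixel clock for 2560x1440@60 (assumed)
BITS_PER_PIXEL = 24        # 8 bits per channel, RGB

per_display_gbps = PIXEL_CLOCK_HZ * BITS_PER_PIXEL / 1e9   # ~5.8 Gbps
two_displays_gbps = 2 * per_display_gbps                   # ~11.6 Gbps

print(f"one display: {per_display_gbps:.2f} Gbps")
print(f"two displays: {two_displays_gbps:.2f} Gbps")
```

Two such streams exceed a single 10 Gbps channel, which is why the DP data must be spread across both channels of the cable, as described above.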

I think it is likely that Intel is hiding the Thunderbolt overhead from the end user, and that the raw signaling rate is perhaps in excess of 10 Gbps per pair.

All that aside, when you have a 4-channel controller that supports 2 Thunderbolt ports in a PC, it's silly to limit it to 45.923 Gbps maximum aggregate throughput when the PHY can handle 80 Gbps. A second PCIe to Thunderbolt protocol adapter (or a 2-channel adapter) would only bump the total up to 65.923 Gbps, still leaving a fair amount of headroom. To envision a scenario where this might be beneficial, consider a 27-inch iMac with a Pegasus R6 full of SSD's connected to each Thunderbolt port and a pair of 2560x1600 DP displays daisy-chained off of those.

The PCIe x2 connection on the 2C Cactus Ridge variant not only limits the number of controllers you can attach to it without having to resort to an additional PCIe switch, but it also limits the total controller throughput to 24.641 Gbps, even though the PHY can handle 40. Usually the problem with using a wider PCIe connection on a chip is finding room for the additional pins on the package. The 2C chip in this case uses the same package layout as the 4C version which has an x4 connection, so what gives? How is there any cost savings in dropping down to an x2 connection, aside from being able to use a slightly less complex PCIe switch internally? But then this would blow the theory that the 2C versions are just using harvested dies...
 