Well, there is this bit in the AnandTech piece.
Apple officially leads the charge with the move to PCIe based SSDs

Those of us who have been using PCIe-based SSDs for several years laugh at this comment.

Apple fans never cease to amuse.


Keep in mind that this is a very small computer.

And how many pros have been asking for "a very small computer"?

Probably not as many as have been asking for a "powerful, flexible, expandable" computer.

But Apple gives them "small" instead. The Mini Mac Pro.
 
Those of us who have been using PCIe-based SSDs for several years laugh at this comment.

Apple fans never cease to amuse.

It's not my comment, it's a direct quote from that AnandTech link I showed you before. Apple is the first to include it and skip SATA out of the box.



And how many pros have been asking for "a very small computer"?

Probably not as many as have been asking for a "powerful, flexible, expandable" computer.

But Apple gives them "small" instead. The Mini Mac Pro.

First of all, lots and lots of pros ask for a very small computer, but that is just a side note; the important part is this:

The link is a review of the MacBook Air; the MacBook Air SSD is what we were discussing, and it's the SSD that has been tested.
 
Well, there is this bit in the AnandTech piece.



So we know that the drive is using 80% of the available bandwidth. Keep in mind that this is a very small computer.

I do not recall the numbers, but I recall comparing the quoted figures for the Air with those of the MP and some figures for PCIe-card SSDs with native PCIe controllers (not to be confused with PCIe-card SSDs that use SATA controllers).

In any event, it is good that Apple is making the transition. It will be interesting to see if PCIe based SSDs will make their way into the iMac.
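For what it's worth, here is a rough back-of-the-envelope check of that 80% figure, assuming (my assumption, not something stated in the review) that the Air's SSD sits on a PCIe 2.0 x2 link:

```python
# Sketch of the bandwidth arithmetic, assuming a PCIe 2.0 x2 link for the
# Air's SSD (an assumption on my part, not a figure from the review).

GT_PER_LANE = 5.0            # PCIe 2.0 raw signalling rate: 5 GT/s per lane
ENCODING = 8 / 10            # PCIe 2.0 uses 8b/10b encoding
LANES = 2                    # assumed x2 link

usable_gbps = GT_PER_LANE * ENCODING * LANES        # 8 Gb/s usable
usable_mb_s = usable_gbps * 1000 / 8                # ~1000 MB/s

print(f"Usable link bandwidth: ~{usable_mb_s:.0f} MB/s")
print(f"80% of that:           ~{0.8 * usable_mb_s:.0f} MB/s")
```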
 
One thing's for certain: the new era at Apple is very interesting, and neither boring nor dull. In a switch, they're now more interesting than the products they have yet to create.
 
So very, very true!

This is the way all IT-based tech is heading: move away from a few specialised experts, and move to the IT-savvy masses:
- You don't need to be a qualified Systems Administrator to be able to create a network with servers;
- You don't need to be the AV expert to edit a movie professionally;
- You don't need a laptop installed and configured by the company's IT department, defined mainly by what you cannot / are not allowed to do....

...etc.

IT4All: Use it. Don't throttle others.

Good luck with that.
 
They had them at CES: a large coil, up and running. In fact they had six in a line, daisy-chained, reaching 600m. I saw the demo and tried to get a vague price out of them; they said they had no idea yet. I suspect it will be $500+.

As for it not being out: supply and demand. Production costs would be high and sales probably low. How many people need a 100m TB cable?

Now that a lot more TB devices are coming out, it'll appear.

I don't know what you saw or what the length of the spool was. Their PR promises the fibre to retail Stateside 13Q2. Currently there's nada available to the public...

Price-wise, InfiniBand QDR and FDR 100m fibre links (similar to the data rates needed for TB) retail at over $1000, so draw appropriate inferences from that. Also, there's no need to develop new fibre technology for TB rates, as IB FDR is faster and you can get 100m fibre for that (the only thing you need on the TB side is to have copper<->laser transceivers with laser power sufficient to account for attenuation over that distance, given the socket's power limitations).
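To put those data rates side by side (nominal, per-direction figures using the usual line encodings; treat these as ballpark numbers, not benchmarks):

```python
# Nominal per-direction data rates behind the comparison above (spec-level
# numbers, not measured throughput).

links_gbps = {
    "Thunderbolt 1 (per channel)": 10.0,
    "Thunderbolt 2 (bonded channel)": 20.0,
    "InfiniBand QDR 4x": 40.0 * 8 / 10,    # 8b/10b encoding -> 32 Gb/s effective
    "InfiniBand FDR 4x": 56.0 * 64 / 66,   # 64b/66b encoding -> ~54.3 Gb/s effective
}

for name, rate in links_gbps.items():
    print(f"{name:32s} ~{rate:5.1f} Gb/s")
```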
 
Erm... quoting from that site: Physical Expansion slot - PCIe 2.0 x16, and straight underneath that: Electrical Bandwidth PCIe 2.0 x4 (Thunderbolt spec). If anything, you've re-iterated the point I was making...

Sure, I know what you mean, but you can multiplex TB2. It will need 2 TB cables and a 2-port/multiplexing chassis; the total bandwidth will be more than enough for a PCIe 2.0 x16 card, though the latency will certainly be higher.

I do have a feeling that Apple may come up with something, though. As they kept saying, this is a work in progress. Everyone is conjecturing about stuff and they have shown us one model. The final version may well be different.

The main thing I know is that 95% of the Mac Pros I've used in companies had the stock 8GB of memory and only one hard drive, and 75% of them had the lowest-spec graphics card. And when using, for example, After Effects, they didn't even have multiprocessing enabled.
 
Sure, I know what you mean, but you can multiplex TB2. It will need 2 TB cables and a 2-port/multiplexing chassis; the total bandwidth will be more than enough for a PCIe 2.0 x16 card, though the latency will certainly be higher.

First, you can't- the two ports on that enclosure are to enable daisy-chaining. Secondly, even if you could multiplex the ports, the controller that multiplexes PCIe onto TB sits on only four lanes, and even if there were some magic solution to multiplex independent TB controllers into a single wider PCIe link (bearing in mind that none exists, because a PCIe "connection" is more than a mere sum of lanes- read the PCIe spec or look at the pinout) the most you'd get is x8 lanes, and not x16 as you say!
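Just to put rough numbers on it (nominal per-lane rates, ignoring protocol overhead):

```python
# Why even a hypothetical two-link TB2 setup falls short of a PCIe 2.0 x16 slot
# (nominal per-lane figures; real-world throughput would be lower still).

MB_PER_LANE_GEN2 = 500                       # PCIe 2.0: ~500 MB/s usable per lane (after 8b/10b)

pcie2_x16_slot = 16 * MB_PER_LANE_GEN2       # 8000 MB/s - what an x16 card expects
one_tb2_controller = 4 * MB_PER_LANE_GEN2    # 2000 MB/s - each TB2 controller sits on Gen2 x4
two_tb2_controllers = 2 * one_tb2_controller # 4000 MB/s - the "multiplexed" best case, if it existed

print(f"PCIe 2.0 x16 slot:               {pcie2_x16_slot} MB/s")
print(f"One TB2 controller (Gen2 x4):    {one_tb2_controller} MB/s")
print(f"Two TB2 controllers (x8 worth):  {two_tb2_controllers} MB/s")
```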

I do have a feeling that Apple may come up with something, though. As they kept saying, this is a work in progress. Everyone is conjecturing about stuff and they have shown us one model. The final version may well be different.

I go off the information Apple made available- there's no TBv3, the new MP is limited to 40 lanes of PCIe GEN3, and the TBv2 controller is PCIe GEN2 x4- there's no way around that...

The main thing I know is that 95% of the Mac Pros I've used in companies had the stock 8GB of memory and only one hard drive, and 75% of them had the lowest-spec graphics card. And when using, for example, After Effects, they didn't even have multiprocessing enabled.

How is this relevant to the PCIe/TB discussion? I have no idea what "they" do in After Effects, but perhaps they should "enable" "multi-processor", otherwise they've just wasted money on tech they don't use and are probably suffering from degraded performance... And, in any event, if that really is the case, then I take it "those guys" are not planning on buying the new MP, as that would be a mindless waste of money...
 
Also, there's no need to develop new fibre technology for TB rates

Yes, one reason is cost, the other is that it's a requirement to move towards the 100Gb/s part of the spec.

even if there was some magic solution to multiplex independent TB controllers into a multiple PCIe link (bearing in mind that none exists because a PCIe "connection" is more than a mere sum of lanes- read the PCIe spec or look at the pinout)

PCIe supports lane aggregation to increase bandwidth in a link.
 
Yes, one reason is cost, the other is that it's a requirement to move towards the 100Gb/s part of the spec.

Of what spec- fibre, TB, PCIe, something else? And surely if you junk the existing tech. and develop "new" tech. when the older tech. is perfectly capable of doing what you want, you're the one who's increasing costs, and not reducing them?

PCIe supports lane aggregation to increase bandwidth in a link.

Yes, if they run off the same clock (not the case if you have two TB controllers) and, in any event, I'm not aware of any "dual-homed" TB controllers, are you?
 
First, you can't- the two ports on that enclosure are to enable daisy-chaining. Secondly, even if you could multiplex the ports, the controller that multiplexes PCIe onto TB sits on only four lanes, and even if there were some magic solution to multiplex independent TB controllers into a single wider PCIe link (bearing in mind that none exists, because a PCIe "connection" is more than a mere sum of lanes- read the PCIe spec or look at the pinout) the most you'd get is x8 lanes, and not x16 as you say!

I go off the information Apple made available- there's no TBv3, the new MP is limited to 40 lanes of PCIe GEN3, and the TBv2 controller is PCIe GEN2 x4- there's no way around that...

How is this relevant to the PCIe/TB discussion? I have no idea what "they" do in After Effects, but perhaps they should "enable" "multi-processor", otherwise they've just wasted money on tech they don't use and are probably suffering from degraded performance... And, in any event, if that really is the case, then I take it "those guys" are not planning on buying the new MP, as that would be a mindless waste of money...

Red Rocket is Gen 2; I didn't say anything about Gen 3.
Thunderbolt does multiplex.
 
Of what spec- fibre, TB, PCIe, something else? And surely if you junk the existing tech. and develop "new" tech. when the older tech. is perfectly capable of doing what you want, you're the one who's increasing costs, and not reducing them?

I quoted one sentence from you, where you referred to Thunderbolt. My reply also refers to it since it's a reply to your statement.

Existing technology is not perfectly capable; optical links are expensive and not as fast as they could be. Intel's investment in silicon photonics has already been demonstrated at 100Gb/s; it's cheap, it's scalable up to Tb/s speeds, and it integrates into a silicon chip, so it's also smaller.

Yes, if they run off the same clock (not the case if you have two TB controllers) and, in any event, I'm not aware of any "dual-homed" TB controllers, are you?

You mentioned that it was not possible with PCIe...
 
Anything mentioning Steve Jobs is sure to keep Apple in the spotlight, a position they relish.

Consider it a priceless competitive advantage that they are certain to milk for years.
 
I quoted one sentence from you, where you referred to Thunderbolt. My reply also refers to it since it's a reply to your statement.

Existing technology is not perfectly capable; optical links are expensive and not as fast as they could be. Intel's investment in silicon photonics has already been demonstrated at 100Gb/s; it's cheap, it's scalable up to Tb/s speeds, and it integrates into a silicon chip, so it's also smaller.

What are you on about? We were talking about sending TB signals down the fibre, and the existing tech. is perfectly capable of that!..

You mentioned that it was not possible with PCIe...

... in the context of taking two separate devices and pretending they are "the same" for PCIe aggregation. Of course it's possible generally- that's how x4, x8, and x16 work...
 
What are you on about? We were talking about sending TB signals down the fibre, and the existing tech. is perfectly capable of that!..

You said: "Also, there's no need to develop new fibre technology for TB rates"

Yes, there is. For the current implementation, cable cost.

... in the context of taking two separate devices and pretending they are "the same" for PCIe aggregation. Of course it's possible generally- that's how x4, x8, and x16 work...

InfiniBand does enable that...
 
It is, but it is x8 or x16 for the RR-X. TB is x4. TBv2's "multiplexing" is internal: 2 up and 2 down (10Gb/s each) become 1 up and 1 down (20Gb/s each). There is one TB<->PCIe controller, which only supports x4. What is so hard to understand?
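The arithmetic, spelled out (nominal rates, my own summary rather than anything off a spec sheet):

```python
# TBv2 "multiplexing" in numbers: the two 10 Gb/s channels of TBv1 are bonded
# into a single 20 Gb/s channel, while the controller's PCIe back-end stays at
# Gen2 x4 (nominal figures).

tb1_aggregate = 2 * 10.0              # Gb/s: two independent 10 Gb/s channels
tb2_bonded = 1 * 20.0                 # Gb/s: one bonded 20 Gb/s channel
pcie2_x4_backend = 4 * 5.0 * 8 / 10   # Gb/s: 4 lanes x 5 GT/s x 8b/10b = 16 Gb/s

print(f"TBv1: 2 x 10 Gb/s    = {tb1_aggregate:.0f} Gb/s aggregate")
print(f"TBv2: 1 x 20 Gb/s    = {tb2_bonded:.0f} Gb/s per bonded channel")
print(f"PCIe 2.0 x4 back-end = {pcie2_x4_backend:.0f} Gb/s")
```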

In fact there are 3 controllers.
 
You said: "Also, there's no need to develop new fibre technology for TB rates"

Yes, there is. For the current implementation, cable cost.

And you resolve that by throwing money at the problem, thereby increasing the cost?..

InfiniBand does enable that...

No, you can run IB over multiple controllers and talk IB, but not PCIe; just as with 802.1ax, you can talk Ethernet across various Ethernet controllers sitting on different PCIe links, but that's not the case with TB...

----------

In fact there are 3 controllers.

Where? Your link expressly states that the electrical bandwidth is capped at PCIe GEN2 x4- there is no way around that...

----------

lol.. where do you come up with this stuff?

Erm, have you looked at their site recently, and is it actually true?! IB, for one, is significantly faster and more capable than TB...
 
And you resolve that by throwing money at the problem, thereby increasing the cost?..

Well no, throwing R&D money at something that will end up being cheaper to manufacture decreases the cost. It's being done anyway; current optical solutions are assembled by hand and are very expensive. The reason I mentioned the surrounding details about Intel's work in silicon photonics was to show that it has many additional benefits apart from cost (such as speed and size), but cost is an important factor even for enterprise customers.

They have moved beyond the R&D phase; the demo was done at the Open Compute Summit. So, it's something that will be beneficial, and it's being done regardless of Thunderbolt.

With the advent of silicon photonics modules such as those Intel demonstrated on Thursday, however, "One hundred–gigabit becomes a very viable technology for the networking industry," Bechtolsheim said, "and it will take off as soon as this is shipping."

http://www.theregister.co.uk/2013/04/11/intel_sillicon_photonics_breakthrough/

No, you can run IB over multiple controllers and talk IB, but not PCIe, just like 802.1ax, but not TB...

What if you have several InfiniBand cards, which is not uncommon?
 