Are you sure? I thought I read an article that said retina TB display won't come yet because 2880 x 5120 at 60 Hz is just over TB2's capabilities.
You are correct. 5120 x 2880, @ 60 Hz, 24 bpp would require at least 22.52 Gbit/s, which is more than a single DisplayPort 1.2 main link or Thunderbolt cable can handle. The DSL5520 Thunderbolt 2 controller has 2 DP 1.2 sink adapters though, so it could drive such a display as two 2560 x 2880 tiles (or four 2560 x 1440 tiles using MST), but that would require at least 23.2 Gbit/s and the use of two Thunderbolt ports / cables.
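The arithmetic above is easy to sanity-check. A quick sketch (the post's 22.52 Gbit/s figure presumably includes blanking overhead from the video timing, which this raw-pixel calculation omits):

```python
# Back-of-the-envelope check of the figures above. Raw pixel rate only;
# real links also carry blanking, which is why the post quotes a higher
# 22.52 Gbit/s requirement.

def raw_pixel_rate(width, height, refresh_hz, bpp):
    """Uncompressed pixel data rate in bit/s, ignoring blanking."""
    return width * height * refresh_hz * bpp

rate = raw_pixel_rate(5120, 2880, 60, 24)
print(f"raw pixel rate: {rate / 1e9:.2f} Gbit/s")  # ~21.23 Gbit/s

# DP 1.2 main link: 4 lanes x 5.4 Gbit/s (HBR2), 8b/10b encoding
dp12_capacity = 4 * 5.4e9 * 8 / 10
print(f"DP 1.2 data capacity: {dp12_capacity / 1e9:.2f} Gbit/s")  # 17.28

# Even before blanking overhead, the display exceeds one DP 1.2 link:
print(rate > dp12_capacity)  # True
```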
That's just awful. Why are prices still so high for it? I always assumed flash was fast on iOS devices because of how speedy they feel and how fast apps open on them. Do you know if there is going to be an improvement in phone/tablet storage speed in the near future?
I kind of jumped all over pgiguere1 for that comment earlier because I was afraid people would have the same reaction that you are. The NAND used in smartphones is often quite good, it's just optimized for different aspects of performance. For instance, small random read performance might be 30x better than that of the fastest HDDs, thus the quick boot times and app launching.
SSDs achieve their crazy performance through parallelism, often using 8 or more channels, not intrinsically faster NAND. That type of design is currently not really feasible for ultra mobile devices where the whole storage system is packaged in a single chip. Through Silicon Vias (TSVs) and more advanced die stacking techniques may change that in the near future though.
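To illustrate the parallelism point with a toy model (the ~50 MB/s per-channel figure below is a made-up round number, not a spec for any real NAND die):

```python
# Illustrative only: aggregate throughput scales with channel count even
# when each NAND die is no faster. PER_CHANNEL_MBPS is a hypothetical
# round number, not a real device spec.

PER_CHANNEL_MBPS = 50  # assumed sustained throughput of one channel

def aggregate_throughput(channels, per_channel=PER_CHANNEL_MBPS):
    """Ideal aggregate MB/s assuming perfect striping across channels."""
    return channels * per_channel

print(aggregate_throughput(1))  # single-package mobile storage: 50
print(aggregate_throughput(8))  # 8-channel SSD controller: 400
```

Same NAND, 8x the throughput, purely from the controller fanning requests out across channels.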
You're referring to latency, not performance; they're not the same thing.
You can have both a hard drive and a flash drive perform at a maximum of 25 MB/s, but the flash drive gets there sooner for you. Opening an app quicker is usually about latency, but once both start transferring data, they'll move at the same speed.
An analogy would be two cars, a Ford Focus and a Porsche. Let's say you put a limiter on the Porsche at 80 MPH. Both cars can go up to 80 MPH, but you'll feel the acceleration sooner in the Porsche than in the Focus; that's what latency is about.
In the end, though, both cars will be going 80 MPH with no difference between them, except when you brake, turn corners, and speed back up. Again, that's latency, not performance.
The performance of the flash chips in mobile devices is increasing every year; they'll get better for sure. SanDisk and Toshiba have been working in this area, and they plan to have something out later this year or next that should push mobile device storage beyond 100 MB/s.
The question is whether those chips can perform at acceptable power and price levels.
As much as I like car analogies, I think this one has some issues. Mostly because although we take throughput in MB/s as a measure of storage performance, if we look closely at any given time interval, data is generally either flowing at close to the maximum rate allowed by the media or interface, or not at all. This is why we look at max sequential read and write speeds. The other corners of the performance equation are small random IOs, which introduce the most latency, or time spent not delivering any data, because they require the most overhead to service. Solid state will generally always beat mechanical spinning disks when it comes to small random IOPS, and can leverage parallelism much more readily to boot—it just costs way more per GB at this point.
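The latency-versus-throughput point can be made with a toy service-time model. The numbers below are illustrative round figures, not measurements (~10 ms average seek plus rotation for an HDD, ~0.1 ms for NAND flash, and an assumed 100 MB/s media rate for both):

```python
# Toy I/O service-time model: fixed latency + transfer time.
# All numbers are illustrative assumptions, not measurements.

def access_time_ms(size_kib, latency_ms, throughput_mb_s):
    """Total time to service one I/O request, in milliseconds."""
    transfer_ms = size_kib / 1024 / throughput_mb_s * 1000
    return latency_ms + transfer_ms

# 4 KiB random read: dominated by latency, so flash wins by ~100x
print(access_time_ms(4, latency_ms=10.0, throughput_mb_s=100))  # HDD: ~10.04 ms
print(access_time_ms(4, latency_ms=0.1, throughput_mb_s=100))   # flash: ~0.14 ms

# 100 MiB sequential read at the same media rate: latency washes out
print(access_time_ms(100 * 1024, latency_ms=10.0, throughput_mb_s=100))  # ~1010 ms
print(access_time_ms(100 * 1024, latency_ms=0.1, throughput_mb_s=100))   # ~1000 ms
```

Small random IOs are almost pure latency; long sequential transfers are almost pure throughput, which is why both corners get benchmarked separately.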
Dudes, I never said Thunderbolt is PCIe. What I said is that it's based on PCIe, as in that's its back end, the same way a PCIe SSD or a PCIe graphics card uses PCIe as its back end. If "based on PCIe" is not the proper term, please let me know how to say it right. From now on I'll say that TB currently depends on PCIe to carry its data to the PC, so it cannot be faster than PCIe.
Thunderbolt intermixes the PCIe and DP lanes in the same controller, so its cable can carry both kinds of traffic to the controller, which then pushes the PCIe traffic onto the motherboard's PCIe lanes.
Thunderbolt does not have a special line path to the CPU.
Here's the graphic from Intel itself:
Image
Unless Thunderbolt has direct paths to the CPU, it CANNOT go faster than the entire bandwidth of PCI-E. Thus, the point I was trying to make.
Thunderbolt is a high-speed, packet based, serial I/O interface. A 4-channel Thunderbolt controller has a PCIe 2.0 back end (a 4-lane, full-duplex serial interface operating at 5 GT/s with 8b/10b encoding), two DP sink connections, and one DP source connection (each with 4-lane, simplex main links operating at up to 5.4 Gbit/s with 8b/10b encoding). Protocol adapters in the controller deserialize the data on each interface and forward the packets via a crossbar switch to the next hop. The packets are reserialized at 10.3125 Gbit/s with 64b/66b encoding before being sent over one of the channels comprising a Thunderbolt link.
The front facing Thunderbolt channels offer considerably more bandwidth than just the PCIe back end, but once you add DisplayPort into the mix, the back end swells to a potential 50.56 Gbit/s in and 33.28 Gbit/s out (or even more if you include DP AUX channels).
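Those back-end figures follow directly from the line rates and encodings given above; reproducing the arithmetic:

```python
# Reproducing the back-end bandwidth arithmetic from the post above.
# Each figure is raw line rate times encoding efficiency
# (8b/10b = 80%, 64b/66b = 64/66).

pcie2_x4 = 4 * 5e9 * 8 / 10        # 16.0 Gbit/s per direction
dp_link  = 4 * 5.4e9 * 8 / 10      # 17.28 Gbit/s per DP main link (HBR2)
tb_chan  = 10.3125e9 * 64 / 66     # 10.0 Gbit/s per Thunderbolt channel

inbound  = pcie2_x4 + 2 * dp_link  # PCIe + two DP sinks  = 50.56 Gbit/s
outbound = pcie2_x4 + dp_link      # PCIe + one DP source = 33.28 Gbit/s

print(f"{inbound / 1e9:.2f} Gbit/s in, {outbound / 1e9:.2f} Gbit/s out")
```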
In a host situation, the DP sink connections are generally provided by digital display outputs from either an integrated or discrete GPU. The diagram you provided is slightly outdated due to changes in the role of the PCH for certain Haswell platforms, but the idea is still more or less the same. In practice (thanks to Apple), the majority of Thunderbolt controllers are actually connected via PEG lanes which come directly from the CPU instead of from the PCH. So with some Haswell systems, both the PCIe and DP back ends for the Thunderbolt controller are provided by direct connections to the CPU, and the bandwidth linking the controller to the CPU is equivalent to or greater than that linking the CPU to the PCH. However, just to clarify, DP packets are never transported via PCIe.
Edit: Of course you are correct in that the PCIe bandwidth of a system is ultimately determined by the aggregate PCIe / DMI bandwidth of the CPU itself these days. In the case of recent desktop and mobile CPUs from Intel, that would be PCIe 3.0 x16 (the PEG lanes) plus DMI 2.0 x4 (which is essentially PCIe 2.0 x4), or 142 Gbit/s after accounting for encoding. For the enthusiast and workstation SKUs it would be PCIe 3.0 x40, plus DMI 2.0 x4, or 331 Gbit/s. So even Alpine Ridge will only be able to provide 22% of a current system's total PCIe bandwidth over a single cable, with just one tiny little connector...
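Checking the aggregate figures from that edit (PCIe 3.0 uses 128b/130b encoding, and DMI 2.0 is treated as PCIe 2.0 x4 with 8b/10b, per the post; totals are per direction):

```python
# Verifying the aggregate PCIe bandwidth figures quoted above.

def pcie3_gbps(lanes):
    """PCIe 3.0 data rate: 8 GT/s per lane with 128b/130b encoding."""
    return lanes * 8e9 * 128 / 130

dmi2 = 4 * 5e9 * 8 / 10              # DMI 2.0 ~ PCIe 2.0 x4: 16 Gbit/s

mainstream = pcie3_gbps(16) + dmi2   # PEG x16 + DMI: ~142 Gbit/s
enthusiast = pcie3_gbps(40) + dmi2   # 40 PCIe 3.0 lanes + DMI: ~331 Gbit/s

print(f"{mainstream / 1e9:.0f} Gbit/s, {enthusiast / 1e9:.0f} Gbit/s")
```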