I believe they used Xeon to address the number of memory channels.

The very high end Core i7 x9xx parts and the workstation Xeons share the same basic micro-architecture implementation.

The fallacy is that the Core i7 variant arrives any quicker (a couple of months perhaps, but it is fundamentally on the same schedule because it is based on exactly the same core implementation). The fallacy is perpetrated by confusing/muddling the different Core i7 implementations. They are not all really the same product line; "Core i7" is a marketing grouping far more than a product grouping.


At this point the Xeon workstation line of development is decoupled on the related factors of x86 core count and memory controllers. More cores typically require more paths to memory (for a fixed memory speed and a much higher relative x86 core speed); a rough back-of-the-envelope on this follows the two lists below.

The desktop-oriented implementation evolves "faster" but comes coupled to the following (some of these parts are named Core i7, though typically not in the x9xx sequence):

1. transistor budget allocated to iGPU.
2. two memory controllers and hence a core count capped at 4
3. very limited top-end PCI-e v3.0 throughput: capped at 16 PCI-e lanes.
(geared toward two x8 slot implementations plus another 2-3 slots that pose as full-bandwidth slots but are really highly oversubscribed IOHub/Southbridge lanes.)

In contrast, the workstation single-package implementation (and, somewhat confusingly, the Core i7 x9xx parts built on that entirely different micro-architecture implementation) comes with:

1. So far, zero transistor budget spent on iGPU cores. (When the transistor budget gets bigger, that is likely to change.)
2. formerly three memory controllers (previous tick/tock generation) and now four (the v1 and v2 Sandy Bridge / Ivy Bridge parts). Hence, the max core count is raised to 6. (The core count is leveling off, probably to make room for a future transition to an iGPU and/or a focus on clock cranking.)
3. Substantially larger PCI-e lane budget. 40 lanes, enough for two x16 and two x4 slots without smoke and mirrors bandwidth allocation.
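
To put rough numbers on the memory point, here is a back-of-the-envelope sketch (my own assumptions: DDR3-1600 and an 8-byte-wide channel; these are theoretical peaks, not Intel-published sustained figures):

# Rough sketch: why core count and memory channels tend to scale together.
# Assumes DDR3-1600 (1600 MT/s) and an 8-byte-wide channel; peak numbers only.
def peak_bandwidth_gbs(channels, mega_transfers=1600, bytes_per_transfer=8):
    """Theoretical peak memory bandwidth in GB/s."""
    return channels * mega_transfers * 1e6 * bytes_per_transfer / 1e9

desktop = peak_bandwidth_gbs(channels=2)      # dual channel, 4 cores
workstation = peak_bandwidth_gbs(channels=4)  # quad channel, 6 cores
print(f"desktop:     {desktop:.1f} GB/s total, {desktop / 4:.1f} GB/s per core")
print(f"workstation: {workstation:.1f} GB/s total, {workstation / 6:.1f} GB/s per core")

The extra channels are what keep the per-core bandwidth from shrinking as the core count climbs.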

ECC memory is more useful for folks who are going to deploy double-digit GB of memory. More memory means a higher likelihood of incurring an error. Also, folks who are handling valuable data typically like to know when that data is screwed up. (Streams of video data have about zero value at the individual bit level.)

Most of the claims that a Core i7 x7xxK is faster are the result of overclocking or just outright apples-to-oranges comparisons. There is slim to no possibility of Apple going the route of selling boxes intended to be "tricked out" by the customers. Scalable software and more cores smoke the mainstream desktop implementations if you don't go the modified/tweaker route. That is more about control than CPU performance.
 
I'm sorry, floating point math on a regular CPU is a completely different matter.

No they aren't. Your initial assertion was:

"But I don't think Xeon will ever see an integrated GPU, because either they sit in a server and then graphics don't matter, or they sit in a workstation and if that workstation is used for graphics,..."

So the topic is about CPU design. Now you want to run away because your assertion is fundamentally flawed, as I outlined, on both counts. First, servers do have graphics embedded. Second, workstation market customers do care about the floating point computational performance of their CPUs. The misdirection you keep trying to push is that GPUs are only useful for graphics. That is a deeply dated and obsolete viewpoint mired in the past, not the future.


So what? The reason it's not pragmatic if you have a graphics card is that you would likely want to use your graphics card, not the integrated graphics of either the CPU or separate chip.

There is nothing stopping the user from using more than one GPU. In fact, the likely trend over the next couple of years is that two or more GPUs become much more common in workstations: one for graphics and one for computations (e.g., Nvidia Maximus, http://www.nvidia.com/object/maximus.html ).

Intel will try to compete to win at least one of those sockets. Your disconnect is that somehow they have to get both to win. Standards like OpenCL mean the computations just have to get done; there isn't a proprietary lock-in that necessarily forces those into specific matched pairs.

One of the arguments against putting in an iGPU was that if you use a discrete GPU card then the iGPU is useless. It is not useless if it is also available as a GPGPU unit. There is software that doesn't take advantage of it, but there is also software that doesn't take advantage of SSE; that doesn't mean it shouldn't be included on the CPU package. Again, focusing on the past to guide the future is deeply flawed.
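
As a rough illustration (a pyopencl sketch of my own, nothing Apple- or Intel-specific; device names are whatever your drivers report), OpenCL simply enumerates every GPU, integrated or discrete, as a compute device:

# Sketch: OpenCL does not care whether a GPU is integrated or discrete;
# every device shows up and can be handed compute work.
import pyopencl as cl

for platform in cl.get_platforms():
    for device in platform.get_devices(device_type=cl.device_type.GPU):
        print(device.name,
              device.max_compute_units, "compute units,",
              device.global_mem_size // (1024 ** 2), "MB global memory")

# Nothing stops an app from building one context per device and sending
# rendering to the discrete card while a kernel runs on the iGPU.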
 
No they aren't. Your initial assertion was:

"But I don't think Xeon will ever see an integrated GPU, because either they sit in a server and then graphics don't matter, or they sit in a workstation and if that workstation is used for graphics,..."

So the topic is about CPU design. Now you want to run away because your assertion is fundamentally flawed, as I outlined, on both counts. First, servers do have graphics embedded. Second, workstation market customers do care about the floating point computational performance of their CPUs. The misdirection you keep trying to push is that GPUs are only useful for graphics. That is a deeply dated and obsolete viewpoint mired in the past, not the future.

First of all, servers were mentioned because Xeon came up. Embedded graphics on a server is used for graphics, not general-purpose computations.

Secondly, I have never said that the workstation market does not care about floating point computations. I said that the floating point capabilities of the CPU are a different matter. If you have ever actually used OpenCL or similar, you know that it's quite different from just using floating point numbers on a CPU. It's a different use case; it's not something that can be exchanged freely without effort.
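
Just to illustrate the gap, here is a rough sketch (pyopencl, purely illustrative; the kernel and names are mine): the multiply-add that is one line of plain floating point on the CPU turns into a context, explicit buffer copies and a kernel written in OpenCL C on the GPU.

import numpy as np
import pyopencl as cl

a = np.random.rand(1_000_000).astype(np.float32)

# CPU version: plain floating point math, one line.
cpu_result = a * 2.0 + 1.0

# GPU version: set up a context, copy data in, compile a kernel, launch, copy out.
ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)
program = cl.Program(ctx, """
__kernel void scale(__global const float *a, __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] * 2.0f + 1.0f;
}
""").build()
program.scale(queue, a.shape, None, a_buf, out_buf)
gpu_result = np.empty_like(a)
cl.enqueue_copy(queue, gpu_result, out_buf)

That is the point: it is a different use case, and porting to it takes real effort.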

There is nothing stopping the user from using more than one GPU.

The issue, as pointed out by both Ars Technica and AnandTech, is not that you cannot use multiple GPUs; it's that the only current solution, used by Asus, re-routes the output of the graphics card with a cable back to the motherboard, something Apple is likely not going to do.

If you look at the block diagram from Intel, it shows two scenarios: one where DisplayPort is taken from the PCH, the other from a discrete GPU. According to Ars Technica, no current GPUs can re-route their screen output back to PCIe so that it can show up at the Thunderbolt socket.

I'm fully aware that there may be other solutions that are not covered, but then they are not publicly known at this point, which is why anyone making assertive statements about this is basically talking BS.
 
iThink 99 percent of the Xeon-based computers worldwide do not need the Thunderbolt option.

The E3-1200 series seems to be relatively unknown, but it is perfect for a lot of professional applications. These chips support ECC memory, have fast cores, have reasonably low power requirements, include embedded graphics (HD 4000) in some models, and support full virtualization and other pro features, too. The only things they don't have: they are limited to single-socket systems (no QPI) and to 4 cores. But they would be perfect for a lower-cost Mac Pro model.
 
new firewire?

Is it just me, or does thunderbolt feel like the new firewire?

Faster! Better! You need it!

But the industry - and therefore consumers - are cool with whatever iteration of USB.
 
Is it just me, or does thunderbolt feel like the new firewire?

Faster! Better! You need it!

But the industry - and therefore consumers - are cool with whatever iteration of USB.

So let's stick with the lowest common denominator the consumer market dictates is enough? Apple cares about the pros; that's why.

Btw, it's also a DisplayPort. So if you do not need the fast data transfer, just use it to connect your monitor and be happy that the option is there for others, or for you, should you need it. Beyond speed, it also offers PCIe expansion in computers without PCIe slots; it's not really there to compete with USB.

Nothing wrong with Firewire btw.
 
Under the same desk sits a Mac Mini Server with a LaCie 5big 10TB DAS plugged into its Thunderbolt port.[..] I see anywhere from 325 to 475 MBps, read and write, on the 5big. Not Mbps; MBps.
Has anybody tested what numbers you'd get with USB 3 (the 4big has one HDD less...)?
10% less maybe? How critical is this (maybe) 10%?
 
Has anybody tested what numbers you'd get with USB 3 (the 4big has one HDD less...)?
10% less maybe? How critical is this (maybe) 10%?

It doesn't touch the 5big in RAID-0 which is > 700 MBps. Not even close. It's an order of magnitude faster going from 4big to 5big and USB 3.0 to Thunderbolt, respectively.

LaCie publishes "Up to 245MB/s" for the 4big. It ships in RAID-0, but you can set up RAID-1 or RAID-5. I can guarantee you they're not going to publish RAID-5 numbers as their maximum performance mode.

The 5big also ships in RAID-0, but you have to depend on OS X for striping and mirroring (or turn to one of the ZFS solutions like I've done if you want something more.)

In RAID-5, the 4big would be no better than one drive's worth of performance in the write scenario. You'd think the same would be true for the 5big in RAIDZ1, but it isn't. Setting the ashift value to 12, since the device has 4K sector drives, nets a 50-75% increase in performance over RAIDZ1 with the default ashift value. That's where I see upwards of 325 MBps in writes. It was much better in RAID-0 (500 to 600 MBps) but I had the constant fear of a drive failure hovering over me.

There's no chance you'll see those numbers on the 4big. I've attached a little BM test I performed after a bit of tinkering. Again, this is a 5big with RAIDZ1, not OS X RAID-0.

I've tested the 5big with AJA and a stopwatch, as well as the ever-popular Blackmagic Speed Test (people like big fancy gauges, don't they?).

Again, it's not even close (much > 10%). Compare the price of a 10 TB 5big to an 8 or 12 TB 4big with USB 3.0 and the price is very much in line with where it should be, all things considered. The 5big even ships with a Thunderbolt cable. That makes two TB cables I've received as part of drive purchases.

You can't daisy chain USB 3.0 devices, but you can daisy chain additional 4bigs via FireWire 800 (up to 3 additional devices).

What kind of performance do you think you'd get on the 2nd, 3rd and 4th 4big? I'll tell you it's impossible that it would be more than 80 MBps.

You can daisy chain 5bigs as well, and if you're in RAID-0, you'll completely saturate the Thunderbolt bus with only two 5bigs. That test has been performed (you can Google it).

Also, there are no "USB 3.0 hubs" that have breakout ports for FireWire, Gigabit Ethernet, Thunderbolt / MiniDP, etc.

It's just not a fair comparison.
 

Attachments: Writes.png · AJA 16GB DiskWhack Test.png
you're right, but

So let's stick with the lowest common denominator the consumer market dictates is enough? Apple cares about the pros; that's why.

Btw, it's also a DisplayPort. So if you do not need the fast data transfer, just use it to connect your monitor and be happy that the option is there for others, or for you, should you need it. Beyond speed, it also offers PCIe expansion in computers without PCIe slots; it's not really there to compete with USB.

Nothing wrong with Firewire btw.

I know all that, but unless it's adopted, it's not that beneficial. FW was great, but it's gone now... that's all I was trying to say.
 
great technology


way overpriced

"MacRumors" forums: Where fans (we think) of Mac's come to complain about pricing of Mac's (HELLO!) and their respective technologies and related peripherals.

I've seen it all now.
 
Yes… But graphics cards or dedicated computation cards like Tesla or Xeon Phi do a much better job than an integrated GPU. Integrated GPUs make sense where graphics cards or even discrete GPUs are impossible, such as in a laptop.
This is very old-fashioned thinking. Integrated GPUs are now good enough for many desktop uses, especially in the context of the new Intel and AMD APUs.

As for compute on these devices, it is a mixed bag at the moment because Apple's drivers don't even support compute on Intel integrated GPUs. It is pretty hard to knock something that isn't even supported.

How is that pragmatic if you have a graphics card?


----------

TB was never intended to be a USB replacement by Apple. Apple got exactly what they wanted out of TB and that is a high performance docking cable.

The fact of the matter is that many consumer products don't even saturate USB 2. High speed isn't always the answer.

Is it just me, or does thunderbolt feel like the new firewire?

Faster! Better! You need it!

But the industry - and therefore consumers - are cool with whatever iteration of USB.
 
This is very old-fashioned thinking. Integrated GPUs are now good enough for many desktop uses, especially in the context of the new Intel and AMD APUs.

You're wrong. The context here is a graphics workstation, not regular desktop usage.

As for compute on these devices, it is a mixed bag at the moment because Apple's drivers don't even support compute on Intel integrated GPUs. It is pretty hard to knock something that isn't even supported.

I know it's possible… The context, again, is a workstation, and someone who perhaps does scientific computations on GPUs in that workstation.

The reason it's not pragmatic is that if you invested in a workstation and a graphics card because you need it, then you surely want to be able to use it, not only a lesser integrated version.

You see, this whole discussion started with a question of the possibility of Thunderbolt in a Mac Pro.
 
First of all, servers were mentioned because Xeon came up. Embedded graphics on a server is used for graphics, not general-purpose computations.
The statement above is out of touch with reality. In some cases the only thing the GPU is used for is computation, as there is no other use for the unit.
Secondly, I have never said that the workstation market does not care about floating point computations. I said that the floating point capabilities of the CPU are a different matter. If you have ever actually used OpenCL or similar, you know that it's quite different from just using floating point numbers on a CPU. It's a different use case; it's not something that can be exchanged freely without effort.
Freely interchanged? Maybe not, but often it is just a case of moving an algorithm from the CPU to the GPU. Currently much effort has to be expended on data transfers and formatting, but that is another issue. The goal, though, is much freer use of the GPU for computation.
The issue, as pointed out by both Ars Technica and AnandTech, is not that you cannot use multiple GPUs; it's that the only current solution, used by Asus, re-routes the output of the graphics card with a cable back to the motherboard, something Apple is likely not going to do.
Interesting, so how does Apple accomplish this on their MBPs?
If you look at the block diagram from Intel, it shows two scenarios: one where DisplayPort is taken from the PCH, the other from a discrete GPU. According to Ars Technica, no current GPUs can re-route their screen output back to PCIe so that it can show up at the Thunderbolt socket.
A block diagram used for marketing purposes is of limited value in this discussion. The real question is this: how many input ports are there on the crossbar?
I'm fully aware that there may be other solutions that are not covered, but then they are not publicly known at this point, which is why anyone making assertive statements about this is basically talking BS.
 
The statement above is out of touch with reality. In some cases the only thing the GPU is used for is computation, as there is no other use for the unit.

You are out of touch with reality. I know that a GPU can be used solely for computations, but if that is the purpose of a server then it's not going to be done on an integrated GPU.

Freely interchanged? Maybe not, but often it is just a case of moving an algorithm from the CPU to the GPU. Currently much effort has to be expended on data transfers and formatting, but that is another issue. The goal, though, is much freer use of the GPU for computation.

Have you ever actually used OpenCL, CUDA or OpenMP? The issue here was a comparison with floating point computations on a CPU, which are a different, routine matter.

Interesting, so how does Apple accomplish this on their MBPs?

The GPU in an MBP does not have its own DisplayPort socket in the chassis; all its I/O is on the board it's soldered to.

A block diagram used for marketing purposes is of limited value in this discussion. The real question is this: how many input ports are there on the crossbar?

Ok, show me a different block diagram then. My point all along has been that the information is lacking.
 
It doesn't touch the 5big in RAID-0 which is > 700 MBps. Not even close. It's an order of magnitude faster going from 4big to 5big and USB 3.0 to Thunderbolt, respectively.

LaCie publishes "Up to 245MB/s" for the 4big. It ships in RAID-0, but you can set up RAID-1 or RAID-5. I can guarantee you they're not going to publish RAID-5 numbers as their maximum performance mode.[..]
It's just not a fair comparison.
Yeah, I know it's not fair, but I was thinking about how much more speed you get because of TB. I'd guess not very much. In the 4big the speed is limited by HDD speed, and in the 5big by the computations of the RAID mode.

It would be really interesting to compare identical setups of 4-disk or 5-disk boxes with USB 3 and TB. In RAID-0 there might be a big difference, but in RAID-5 or RAID-Z there might not. USB 3 should handle 500 MB/s easily. And next year (okay, 2014) USB 4 even more...
 
I feel it's important to mention, in such threads...

Thunderbolt is really mostly oriented at pro users. If you don't use it, big whoop. Most people don't use a fraction of the power their computers offer. For the folks who need it, it's a Good Thing, and not "overpriced," in that Thunderbolt peripherals are actually pretty reasonably priced, compared with PCIe alternatives.

Used to be a day, people really liked the fact that Macs were a "Pro" platform. I guess now that Apple is all hip and trendy, people just want to moan about everything. And ironically, the casual users moan about Thunderbolt being there, and the "Pros" moan about the platform not being "Pro" enough.

:rolleyes:

Ah, finally someone who uses his brain. Sadly there are few like you on internet forums...
 
Yeah, I know it's not fair, but I was thinking about how much more speed you get because of TB. I'd guess not very much. In the 4big the speed is limited by HDD speed, and in the 5big by the computations of the RAID mode.

It would be really interesting to compare identical setups of 4-disk or 5-disk boxes with USB 3 and TB. In RAID-0 there might be a big difference, but in RAID-5 or RAID-Z there might not. USB 3 should handle 500 MB/s easily. And next year (okay, 2014) USB 4 even more...

Two 5bigs in RAID-0 will push about 1200 MBps. Whatever combination of USB3 drives you put together can't get there. Perhaps when 10 Gigabit USB ships, it will inch closer.
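
If you want the back-of-the-envelope on why (raw line rates only, ignoring encoding and protocol overhead, so real-world numbers land lower):

def gbps_to_mbs(gbps):
    # 1 Gb/s = 125 MB/s
    return gbps * 125

print(f"USB 3.0 (5 Gb/s):              ~{gbps_to_mbs(5):.0f} MB/s")
print(f"Thunderbolt channel (10 Gb/s): ~{gbps_to_mbs(10):.0f} MB/s")
print(f"Two TB channels (20 Gb/s):     ~{gbps_to_mbs(20):.0f} MB/s")

First-generation Thunderbolt carries two 10 Gb/s channels per port, which is why numbers like 1200 MBps are reachable there while a single USB 3.0 link tops out well below it.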

But for those who want it (performance) now, it's there; for those who need it now, it's there, and they don't have to wait. It really is a suitable alternative to everyday Fibre Channel connections in certain scenarios.

The cost of entry is much lower than FC. And it does more: display in addition to storage. Sure, USB can talk to printers and webcams, but that's low-bandwidth stuff.
 
It's embarrassing that the top-line professional system, the Mac Pro, doesn't have Thunderbolt while the rest of the line currently does. If there is a Mac Pro refresh announced this month, it had better include Thunderbolt along with the hinted return to the pro market.

It would have been more embarrassing for Apple to rush a half-baked solution to market just to put Thunderbolt on the "Pro" desktop. They're intending to do it right. Whether or not everyone else agrees with them when they do bring it to market is another story.
 
Two 5bigs in RAID-0 will push about 1200 MBps. Whatever combination of USB3 drives you put together can't get there. Perhaps when 10 Gigabit USB ships, it will inch closer.

But for those who want it (performance) now, it's there; for those who need it now, it's there, and they don't have to wait. It really is a suitable alternative to everyday Fibre Channel connections in certain scenarios.

The cost of entry is much lower than FC. And it does more: display in addition to storage. Sure, USB can talk to printers and webcams, but that's low-bandwidth stuff.
That's why I wasn't talking about RAID-0.
Btw, who's using FC with DAS?
Isn't this the nichest of the niche?

FC is usually used with a SAN and is connected through switches, so it's pretty far away from what TB is and what it's used for.
Although many Mac users now have to buy a TB box to put an FC NIC in it. Doesn't make either protocol any cheaper...
 
That's why I wasn't talking about RAID-0.
Btw, who's using FC with DAS?
Isn't this the nichest of the niche?

FC is usually used with a SAN and is connected through switches, so it's pretty far away from what TB is and what it's used for.
Although many Mac users now have to buy a TB box to put an FC NIC in it. Doesn't make either protocol any cheaper...

The point I'm making is FC is unreachable for many, for the performance it has to offer. TB offers a very competitive level of performance for a single user and is within reach for prosumers / professionals.

As far as your 4big to 5big comparison goes, it's not really equal. The 4big can do RAID-5 in the box; the 5big only does RAID-0 and RAID-1. You can do RAIDZ in software with either, and I'm convinced you'd see more than a 10% performance difference. But going back to what the 5big (TB edition, not the NAS) is truly meant for, it's the video pro who needs the performance it can bring in RAID-0, which is a couple of hundred actual MB/s more than the theoretical max performance of USB 3.0.

But feel free to purchase a couple and run the tests (or fly to France and run them out of their labs if you're a journalist.)
 
The point I'm making is FC is unreachable for many, for the performance it has to offer. TB offers a very competitive level of performance for a single user and is within reach for prosumers / professionals.

As far as your 4big to 5big comparison goes, it's not really equal. The 4big can do RAID-5 in the box; the 5big only does RAID-0 and RAID-1. You can do RAIDZ in software with either, and I'm convinced you'd see more than a 10% performance difference. But going back to what the 5big (TB edition, not the NAS) is truly meant for, it's the video pro who needs the performance it can bring in RAID-0, which is a couple of hundred actual MB/s more than the theoretical max performance of USB 3.0.

But feel free to purchase a couple and run the tests (or fly to France and run them out of their labs if you're a journalist.)
Hmm, looks like I chose a very bad comparison to get a glimpse of the real-world difference between TB and USB 3. The 4big and 5big are such different products.

I do know that there is use for TB among some video professionals. The thing I've criticized is that only 0.1% of users benefit from TB, and usually Apple doesn't cater to such a niche.

If you use software RAID, I'd guess the limiting factor is how much CPU power you have for the software. If you have hardware RAID-5, once again the limiting factor is the hardware's processing power.

If you need only 5 or fewer HDDs in RAID-0, you can just use the internal HDDs of the MP in a desktop environment. In the field with a laptop, USB 3 is fast enough for a 3-disk RAID-0.

So what's left is the need for an over-5-disk RAID-0 in a desktop environment, or an over-4-disk RAID-0 with a MacBook in the field. These are pretty über-niche things.
E.g. if you are a DIT on big productions, you'll use much more expensive equipment, and if you are an indie DIT with no budget, you're better off with cheaper storage than TB-connected RAIDs.

If we really think that Apple wants to offer support for this kind of niche use, then why not anything else? Why no 17" Retina? Why no matte Retina? Why not offer wider expandability like modular bays that could be used as an ExpressCard slot, CF reader, secondary or tertiary storage, or just an extra battery? Why not give decent GPU options for hard work? Why did they have to drive away video professionals with Shake & FCP?

A much more logical reason for TB to exist in even the cheapest Macs is just PR value. You can say that it has something better than the rest, even if it's probably never used. Also, there are lots of people who like to buy new tech they don't ever need.
 