Where? I don't see anything on Anandtech.com yet....
It's there, you just need to be special to access it
Well, I'm happy that I went with the i5-2500K. Bulldozer's gaming performance is pretty terrible.
More info.
"...current roadmaps place the launch of these chips in the second Q1 of 2012"
Isn't that Q2?
http://www.guru3d.com/news/intel-xeon-e5-sandy-bridgee-cpus-launch-schedule/
The piece seems to be pretty much ready. EDIT: It's up now: http://www.anandtech.com/show/4955/the-bulldozer-review-amd-fx8150-tested
Read the article. Yuuueeeck! What a horrible release for AMD. Terrible performance across almost every test.
What the hell happened with the TDPs? They went up, along with power consumption, even though the smaller 32nm process is used. I bet it's the higher clocks being pushed.
I am thinking this is AMD's NetBurst era.
Like Anand said, soon there will be no competition in the higher-end consumer x86 market. AMD's ace is the GPU in its APUs, which can make up for the loss in CPU performance, but there is no GPU in Bulldozer. Unless AMD can fix their single-threaded performance, it seems like Intel will be the sole player in the higher-end x86 market. In the server market, AMD can be more competitive, since multithreaded performance is often more important there (and that's where AMD is strong).
I wonder what the result would be if you combined an HD 6770M and a Bulldozer CPU into an APU. I mean, that would be one good combination and selling point for AMD.
At least we have something to replace the Propus-based cores on Llano, and it will be Piledriver cores. The selling point is going to be the fGPU, though.
That's Trinity. Well, you only get four BD cores. I wonder how that will work out, given that BD sucks at lightly threaded loads. Clock for clock, Phenom II turned out to be faster LOL
However, one thing the Anandtech article mentioned is that Windows currently does not recognize AMD's thread optimizations (sending threads 1a and 1b to the same core module), so I can see some performance gains to be had there. But still, I don't think it'll make much difference. Hey, I might be wrong. Let's see what AMD does about this.
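To make that concrete, here's a rough, untested sketch of what forcing that pairing by hand could look like with the Win32 affinity API. The assumption that logical CPUs 0 and 1 sit on the same Bulldozer module is mine, not from the article; you'd want to verify the topology with GetLogicalProcessorInformation first.

```c
/* Rough sketch: pin two cooperating threads onto one Bulldozer module
 * so they share its L2 and front end, instead of letting the scheduler
 * spread them across modules. ASSUMPTION: logical CPUs 0 and 1 belong
 * to the same module; verify with GetLogicalProcessorInformation. */
#include <windows.h>
#include <stdio.h>

static DWORD WINAPI worker(LPVOID arg)
{
    volatile long sum = 0;
    long i;
    for (i = 0; i < 100000000L; i++)  /* stand-in workload */
        sum += i;
    (void)arg;
    return 0;
}

int main(void)
{
    HANDLE t[2];
    int i;

    for (i = 0; i < 2; i++) {
        /* Create suspended so the affinity is set before it runs. */
        t[i] = CreateThread(NULL, 0, worker, NULL, CREATE_SUSPENDED, NULL);
        SetThreadAffinityMask(t[i], (DWORD_PTR)1 << i);
        ResumeThread(t[i]);
    }
    WaitForMultipleObjects(2, t, TRUE, INFINITE);
    puts("both threads finished on one module");
    return 0;
}
```

Whether packing threads onto one module helps (shared cache, lower power) or hurts (contended front end) depends on the workload, which is exactly the call Windows can't make yet.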
There are a few reviewers that did take Bulldozer for a spin with the developer preview of Windows 8. There are a few gains, but nothing earth-shattering. Piledriver-based cores should be out alongside Windows 8, and we will have Trinity too.
Don't expect miracles: http://www.tomshardware.com/reviews/fx-8150-zambezi-bulldozer-990fx,3043-23.html
Besides, what is the point of that when W8 is a year away anyway... Next gen should be out by then.
I can understand people giving up and just going with LGA 1155 now.
Why bother with a PCIe-bandwidth-limited socket?
Depends on one's needs. I knew I had no need for the PCIe lanes, hence I went with the i5-2500K a few weeks ago. It's $300 for the CPU anyway, and the X79 boards are most likely more expensive as well.
Has anyone benchmarked GPGPU applications on varying numbers of PCIe lanes?
I can think of a few reasons the bandwidth is needed, such as GPGPU applications on a workstation (better performance than a DP- or MP-based system), or even building a cluster of these.
On a single-socket system, greater I/O bandwidth is a somewhat exotic requirement.
Not sure I'm following you; are you referring to gaming?
The benefits of dual x16 GPUs are minimal, and all the bandwidth benefits of the Patsburg PCH are no longer present. It is piped over DMI 2.0 just like the Intel 6 series.
True...
There are also advantages to this approach in the server market.
Take a look at the following article (I think you'll find it interesting).
LGA 2011 is currently limited to 40 PCI-Express 2.0 lanes. That is an improvement over LGA1366/X58, but it is still not PCI-Express 3.0.
Yes, I was aware of the new DMI thing. However, apparently it's up to 2.5GB/s with DMI 2.0. Still, fewer PCIe lanes mean extra features will not be available. For example, USB 3.0 currently needs PCIe lanes since there is no native support for it.
So we have x4 PCIe 3.0 and x16 PCIe 2.0, which is not much. This new X79 will be bandwidth-starved compared to our trusted X58.
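For what it's worth, the rough per-direction numbers behind that complaint; the per-lane figures are my own ballpark from the PCIe specs, not from any review:

```c
/* Back-of-the-envelope per-direction bandwidth for the setups above.
 * ASSUMED per-lane rates: PCIe 2.0 (5 GT/s, 8b/10b)    ~= 0.5 GB/s,
 *                         PCIe 3.0 (8 GT/s, 128b/130b) ~= 0.985 GB/s. */
#include <stdio.h>

int main(void)
{
    const double gen2 = 0.5;    /* GB/s per PCIe 2.0 lane */
    const double gen3 = 0.985;  /* GB/s per PCIe 3.0 lane */

    printf("x16 PCIe 2.0 slot   : %4.1f GB/s\n", 16 * gen2);
    printf("x4  PCIe 3.0 link   : %4.1f GB/s\n",  4 * gen3);
    printf("DMI 2.0 (x4 2.0-ish): %4.1f GB/s\n",  4 * gen2);
    printf("40 lanes of PCIe 2.0: %4.1f GB/s\n", 40 * gen2);
    return 0;
}
```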
I didn't find it in a quick search, but it could be out there (I ran a GPGPU search, and on Tesla 2075, not by a particular application/suite).
I expect that 4x of those lanes will be reserved for QPI communications, as is the case with LGA1366/X58, but that hasn't been clearly stated so far.
We have all seen what Tom's Hardware and HardOCP have turned out in testing flagship video cards in games in bandwidth-limited situations. I was just curious about GPGPU applications instead of games.
But it should follow SLI/Crossfire scaling, so there is relevant information out there (the biggest problem is the lane count available to designers; past 36 lanes on LGA1366, they need an nF200 chip). Example article.
And what this information shows is that, though there is a performance loss, it's not that much (less than 5% between x8 and x16 slots for the same card and benchmark, according to the article from Tom's Hardware).
As for PCIe 3.0, that may come after the LGA2011 parts release, due to the lack of suitable components/devices to test with (PCI-SIG testing specs aren't even finalized <last I checked a week or so ago, anyway...>).
The real advantage IMO, however, is that it's possible to double the lane count in a DP system (80 lanes total). Assuming there is a reserve for QPI, 72 lanes will remain for slots (you could actually get 4 x16 slots + 2 x4 slots = 72 lanes for a GPGPU beast-from-Hades workhorse).
I realized that, and I wish I was able to find that sort of testing (ideally a Tesla 2075 tested in x4, x8, and x16 electrical slots on the same system, running the same benchmark).
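Something along these lines would do it, sketched against the CUDA runtime API (untested; the buffer size and repeat count are arbitrary choices of mine). Run the same binary with the card seated in x4, x8, and x16 slots and compare the reported GB/s:

```c
/* Sketch of a host-to-device PCIe bandwidth probe using the CUDA
 * runtime API. Pinned host memory avoids an extra staging copy, so
 * the measured rate tracks the PCIe link itself.
 * Compile with something like: nvcc -o bw bw.c */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    const size_t bytes = 256u * 1024 * 1024;  /* 256 MB per copy */
    const int reps = 20;
    void *host, *dev;
    cudaEvent_t start, stop;
    float ms;
    int i;

    cudaMallocHost(&host, bytes);  /* pinned (page-locked) buffer */
    cudaMalloc(&dev, bytes);
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start, 0);
    for (i = 0; i < reps; i++)
        cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(stop, 0);
    cudaEventSynchronize(stop);
    cudaEventElapsedTime(&ms, start, stop);  /* elapsed milliseconds */

    printf("host->device: %.2f GB/s\n",
           (double)bytes * reps / (ms / 1000.0) / 1e9);

    cudaFree(dev);
    cudaFreeHost(host);
    return 0;
}
```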