Read the article. Yuuueeeck! What a horrible release for AMD. Terrible performance across almost every test.

What the hell happened with TDPs? Both TDP and power consumption went up even though the smaller 32nm fab process is used. I bet it's the higher clocks being pushed.

I am thinking this is AMD's NetBurst era.
 

Like Anand said, soon there will be no competition in the higher-end consumer x86 market. AMD's ace is the GPU in its APUs, which can make up for the loss in CPU performance, but there is no GPU in Bulldozer. Unless AMD can fix their single-threaded performance, it seems like Intel will be the sole player in the higher-end x86 market. In the server market, AMD can be more competitive, since multithreaded performance is often more important there (and that's where AMD is strong).
 

My thoughts exactly. Without any pressure from AMD to drive price per performance, Intel will gouge us with really expensive CPUs. Market domination by Intel will mean stagnation and high prices for us.

This is bad, really bad. Honestly, I was never a fan of the more-cores-per-buck approach, and trying to feed consumers this is also bad. Today's software is still being geared towards harnessing multi-core CPUs, but it's not there yet and still relies on the single-threaded performance Intel has had (the rough Amdahl's-law numbers at the end of this post illustrate why).

I wonder what the result would be if you combined an HD 6770M and a Bulldozer CPU into an APU. I mean, that would be one good combination and selling point for AMD.
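
Going back to my single-thread point: a quick Amdahl's-law toy calculation shows why fewer, faster cores often still win. The parallel fraction and per-core speeds here are made up purely for illustration, not taken from any benchmark:

```c
/* Amdahl's-law toy comparison: 8 slower cores vs 4 faster cores.
 * The parallel fraction and per-core speed factors are invented
 * purely to illustrate the argument, not measured anywhere. */
#include <stdio.h>

static double speedup(double parallel_fraction, int cores, double core_speed)
{
    /* runtime relative to a baseline core of speed 1.0 */
    double serial   = (1.0 - parallel_fraction) / core_speed;
    double parallel = parallel_fraction / (core_speed * cores);
    return 1.0 / (serial + parallel);
}

int main(void)
{
    double p = 0.60;                         /* say 60% of the work parallelizes */
    printf("8 cores at 0.8x per-core speed: %.2fx\n", speedup(p, 8, 0.8));
    printf("4 cores at 1.2x per-core speed: %.2fx\n", speedup(p, 4, 1.2));
    return 0;
}
```

With those made-up numbers the four faster cores come out ahead (~2.2x vs ~1.7x), which is the whole problem with trading single-threaded speed for core count.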
 

That's Trinity. Well, you only get four BD cores, though. I wonder how that will work out, given that BD sucks at lightly threaded loads. Clock for clock, Phenom II turned out to be faster, LOL.
 
At least we have something to replace the Propus-based cores in Llano, and it will be Piledriver cores as well. The selling point is going to be the fGPU, though.
 
That's Trinity. Well, you only get four BD cores, though. I wonder how that will work out, given that BD sucks at lightly threaded loads. Clock for clock, Phenom II turned out to be faster, LOL.

Why not use Phenom II cores on the smaller fab process and stick the 6770M in there? I bet $10,000 that it'd definitely be faster than the four BD cores...

At least we have something to replace the Propus-based cores in Llano, and it will be Piledriver cores as well. The selling point is going to be the fGPU, though.

True, but at what cost? I was expecting AMD to come out with a half-decent CPU to keep Intel watchful when it comes to pricing. With this, it's like handing Intel the market on a silver platter. A GPU will be a selling point, but at some point people will wonder what gives with the CPU.

However, one thing the AnandTech article mentioned is that Windows currently does not recognize AMD's thread optimizations (scheduling threads 1a and 1b onto the same core module), so I can see some performance gains to be had. But still, I don't think it'll make much difference. Hey, I might be wrong. Let's see what AMD does about this.
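
To make the module-scheduling point concrete, here's a rough, untested sketch of the kind of test a review could run on Windows: pin two FP-heavy threads either to both logical CPUs of one module or to two separate modules, and compare the run times. It assumes logical CPUs 0/1 share a module while 0/2 don't, which may not match every BIOS/OS enumeration:

```c
/* Minimal sketch (untested): run two floating-point workers pinned to
 * either the same Bulldozer module (CPU0+CPU1) or two separate modules
 * (CPU0+CPU2). Separate modules is roughly what a module-aware
 * scheduler would pick for a two-thread load, so the timing gap shows
 * what the Windows scheduler patches are chasing. */
#include <windows.h>
#include <stdio.h>

static DWORD WINAPI worker(LPVOID arg)
{
    volatile double x = 1.0;
    (void)arg;
    for (long i = 0; i < 200000000L; ++i)     /* dummy FP load */
        x = x * 1.000001 + 0.5;
    return 0;
}

static double run_pinned(DWORD_PTR mask_a, DWORD_PTR mask_b)
{
    HANDLE t[2];
    DWORD start = GetTickCount();

    t[0] = CreateThread(NULL, 0, worker, NULL, CREATE_SUSPENDED, NULL);
    t[1] = CreateThread(NULL, 0, worker, NULL, CREATE_SUSPENDED, NULL);
    SetThreadAffinityMask(t[0], mask_a);      /* pin to chosen logical CPU */
    SetThreadAffinityMask(t[1], mask_b);
    ResumeThread(t[0]);
    ResumeThread(t[1]);
    WaitForMultipleObjects(2, t, TRUE, INFINITE);
    CloseHandle(t[0]);
    CloseHandle(t[1]);

    return (GetTickCount() - start) / 1000.0; /* seconds */
}

int main(void)
{
    printf("same module (CPU0+CPU1): %.2f s\n", run_pinned(1 << 0, 1 << 1));
    printf("two modules (CPU0+CPU2): %.2f s\n", run_pinned(1 << 0, 1 << 2));
    return 0;
}
```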
 

Don't expect miracles: http://www.tomshardware.com/reviews/fx-8150-zambezi-bulldozer-990fx,3043-23.html

Besides, what's the point of that when W8 is a year away anyway... The next gen should be out by then.
 
There are a few reviewers that did take Bulldozer for a spin on the Developer Preview of Windows 8. There are a few gains, but nothing earth-shattering. Piledriver-based cores should be out alongside Windows 8, and we will have Trinity too.
 

Ohhh... well, then this is definitely on my list as AMD's NetBurst era. What a shame.


Hopefully, some good comes out of that.
 
What bothers me is the lack of traction that AM3+ had before Bulldozer was even available. Tom's Hardware goes as far as to state that AM3+ is required to even run Bulldozer. I remember some vendors offering BIOS updates for Bulldozer support on AM3 motherboards, but that appears to be locked out by AMD as well. AMD expects you to have or purchase an AM3+ motherboard using an AMD 900 Series chipset. There was little motivation to buy one before (forward-compatibility expectations and waiting for Bulldozer), and even now there is barely any more.

I can understand people giving up and just going with LGA 1155 now.
 
Why bother with a PCIe-bandwidth-limited socket?

Depends on one's needs. I knew I had no need for the extra PCIe lanes, hence I went with an i5-2500K a few weeks ago. It's $300 for the CPU anyway, and the X79 boards will most likely be more expensive as well.
 
The benefits of dual x16 GPUs are minimal and all the bandwidth benefits of the Patsburg PCH are no longer present. It is piped over DMI 2.0 just like the Intel 6 series.
Not sure I'm following you; are you referring to gaming?
 
On a single socket system, greater I/O bandwidth is a somewhat exotic requirement.
I can think of a few reasons it's needed, such as GPGPU applications on a workstation (better performance than a DP or MP based system), or even building a cluster of these.

There are also advantages to this approach in the server market.

Take a look at the following article (I think you'll find it interesting).
 
The benefits of dual x16 GPUs are minimal and all the bandwidth benefits of the Patsburg PCH are no longer present. It is piped over DMI 2.0 just like the Intel 6 series.

Yes, I was aware of the new DMI thing. However, apparently, it's up to 2.5GB/s with DMI 2.0. Still, fewer PCIe lanes means extra features will not be available. For example, USB 3.0 currently needs PCIe lanes since there is no native support for it.

So we have x4 PCIe 3.0 and x16 PCIe 2.0, which is not much. This new X79 will be bandwidth-starved compared to our trusted X58.
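
To put that DMI figure in perspective, here's a rough back-of-the-envelope tally of what could end up hanging off the PCH (the per-device throughputs are ballpark typical numbers, not measurements):

```c
/* Rough check of how quickly a few devices behind the PCH could eat
 * the DMI 2.0 link, using the ~2.5 GB/s figure quoted above. The
 * per-device throughputs are ballpark "typical best case" numbers. */
#include <stdio.h>

int main(void)
{
    double dmi_gbps   = 2.5;    /* GB/s, figure quoted above           */
    double sata6g_ssd = 0.5;    /* GB/s, one fast SATA 6Gb/s SSD       */
    double usb3_port  = 0.4;    /* GB/s, one busy USB 3.0 port         */
    double gige_nic   = 0.125;  /* GB/s, gigabit Ethernet              */

    double load = 2 * sata6g_ssd + 2 * usb3_port + gige_nic;
    printf("aggregate device load: %.2f GB/s of %.2f GB/s DMI\n",
           load, dmi_gbps);
    return 0;
}
```

Two quick SSDs, a couple of busy USB 3.0 ports and the NIC already get close to the link, which is why the lane count (and what has to share DMI) matters.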
 
I can think of a few reasons it's needed, such as GPGPU applications on a workstation (better performance than a DP or MP based system), or even building a cluster of these.

There are also advantages to this approach in the server market.

Take a look at the following article (I think you'll find it interesting).
Has anyone benchmarked GPGPU applications on varying lanes of PCIe bandwidth?

Yes, I was aware of the new DMI thing. However, apparently, it's up to 2.5GB/s with DMI 2.0. Still, fewer PCIe lanes means extra features will not be available. For example, USB 3.0 currently needs PCIe lanes since there is no native support for it.

So we have x4 PCIe 3.0 and x16 PCIe 2.0, which is not much. This new X79 will be bandwidth-starved compared to our trusted X58.
LGA 2011 is currently limited to 40 PCI-Express 2.0 lanes. That is an improvement over LGA1366/X58 but it is still not PCI-Express 3.0.
 
Has anyone benchmarked GPGPU applications on varying lanes of PCIe bandwidth?
I didn't find it in a quick search, but it could be out there (I searched on GPGPU and on the Tesla 2075 in general, not by a particular application/suite).

But it should follow SLI/Crossfire scaling, so there is relevant information out there (the biggest problem is the lane count available to designers; past 36 lanes on LGA1366, they need an nF200 chip). Example article.

And what this information shows is that though there is a performance loss, it's not that much (less than 5% between x8 and x16 slots for the same card and benchmark, according to the article from Tom's Hardware).

LGA 2011 is currently limited to 40 PCI-Express 2.0 lanes. That is an improvement over LGA1366/X58 but it is still not PCI-Express 3.0.
I expect that 4 of those lanes will be reserved for QPI communication, as is the case with LGA1366/X58, but that hasn't been clearly stated so far.

As for PCIe 3.0, that may come after the LGA2011 parts release, due to the lack of suitable components/devices to test with (the PCI-SIG testing specs aren't even finalized, last I checked a week or so ago anyway...).

The real advantage IMO, however, is that it's possible to double the lane count in a DP system (80 lanes total). Assuming there is a reserve for QPI, then 72 lanes will remain for slots (you could actually get 4 x16 slots + 2 x4 slots for a GPGPU beast-from-Hades workhorse). :eek: :D
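
For what it's worth, the lane budget works out like this, assuming the per-socket QPI reserve above actually happens (which is still speculation):

```c
/* Back-of-envelope lane budget for a dual-socket LGA2011 board,
 * assuming 40 PCIe lanes per socket and a speculative 4-lane
 * per-socket reserve for QPI (not confirmed by Intel). */
#include <stdio.h>

int main(void)
{
    int sockets = 2, lanes_per_socket = 40, qpi_reserve_per_socket = 4;
    int total  = sockets * lanes_per_socket;                /* 80 */
    int usable = total - sockets * qpi_reserve_per_socket;  /* 72 */

    printf("total lanes : %d\n", total);
    printf("for slots   : %d  (= 4 x16 + 2 x4)\n", usable);
    return 0;
}
```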
 
We have all seen what Tom's Hardware and HardOCP have turned up testing flagship video cards in games in bandwidth-limited situations. I was just curious about GPGPU applications instead of games.

What really makes the new platform interesting is that the PCIe controller is on the CPU, so multi-socket systems get really fun, as you have mentioned.
 
I was just curious about GPGPU applications instead of games.
I realized that, and I wish I was able to find that sort of testing (ideally a Tesla 2075 tested in x4, x8, and x16 electrical slots on the same system, running the same benchmark).

But it's not all that different when you look at the fundamentals (instructions + data passed to the GPU, the GPU handles the processing, then returns the output). The main difference is that the output is returned over the PCIe bus rather than over a graphics port such as DVI. Since the output data tends to be small (e.g., a floating-point value for each variable), it doesn't consume a lot of additional bandwidth on the PCIe bus compared to sending graphics output to a monitor.
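
A quick, idealized calculation makes the point. Using theoretical PCIe 2.0 rates (500 MB/s per lane), no protocol overhead and a made-up job size, the input copy dominates and the returned results barely register:

```c
/* Idealized PCIe transfer times for a GPGPU job that reduces 100M
 * floats down to ~1K result values. Uses the theoretical PCIe 2.0
 * per-lane rate (500 MB/s) and ignores protocol overhead, so the
 * absolute numbers are optimistic; the point is input vs output size. */
#include <stdio.h>

int main(void)
{
    double lane_mbps = 500.0;              /* MB/s per PCIe 2.0 lane    */
    double input_mb  = 100e6 * 4 / 1e6;    /* 100M floats = 400 MB in   */
    double output_mb = 1024 * 4 / 1e6;     /* ~1K result floats back    */

    int widths[] = { 4, 8, 16 };
    for (int i = 0; i < 3; ++i) {
        double bw = lane_mbps * widths[i]; /* MB/s for this link width  */
        printf("x%-2d: input %6.1f ms, output %.3f ms\n",
               widths[i],
               1000.0 * input_mb / bw,
               1000.0 * output_mb / bw);
    }
    return 0;
}
```

Even at x4 the return trip is fractions of a millisecond, so the lane count mostly decides how long the input copy takes relative to the kernel runtime, which fits the small x8-vs-x16 deltas in the gaming articles.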
 