
cube

Suspended
May 10, 2004
17,011
4,972
The Radeon Pro Duo is $999 and the MI25 has yet to show up.

"...AMD previously indicated with the Radeon Instinct MI25 announcement that they would be targeting 25 TFLOPS or better of half precision (FP16) performance on their high-end Vega parts, and the Vega Frontier Edition will be delivering on the “or better” part of that ...."
http://www.anandtech.com/show/11403/amd-unveils-the-radeon-vega-frontier-edition

I doubt AMD is going to price the Frontier the same as or cheaper than the Pro Duo and leave no room for the MI25. The Pro Duo is two high-yield GPUs (they have been in production for almost a year) paired with normal, very mature GDDR5 memory and no interposer overhead. The Frontier Edition is probably a very low-yield GPU (clocked at the extreme high end) paired with HBMv2 memory that doesn't scale well in volume and is substantially more expensive.

The point is to be less expensive than a $4-7K priced solution from your competitor. Something priced at $2K-3.5K does that. If AMD prices it in the $2.5K range and it sags back to $1.5K over a year, they'd still be making margin.

They aren't trying to drive volume with this card. They are trying to grow a development ecosystem so that there will be software written and ready for when the volume priced card(s) are released.
Radeon Instinct is not a graphics card, it is an accelerator. That is a different market and price bracket.

I don't see why the Frontier cannot cost $999 when it is simpler than the Duo, which compensates for the bigger chip. I think the Duo has been obsoleted; multi-GPU cards are fringe products.

Price it at $999 and an ecosystem will definitely grow. Average consumers will not buy it.
 

Zarniwoop

macrumors 65816
Aug 12, 2009
1,036
759
West coast, Finland
hUMA works on Intel.

I've been really trying to dig up information about this. All I can find is this: Intel has a closed UMA system for their iGPU. And that's it. For a dGPU there hasn't been a way for the GPU to go and read system RAM without asking the CPU to deliver the content. I know there is an API for zero-copy in Windows, but so far it has been utilised by iGPUs.

If you have found something else, I would be happy to extend my knowledge on this. =)
OK, I have to add to my previous post that Intel has indeed opened some doors in the Atom and Core M SoCs. But these are hardly good for a desktop environment.
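For what it's worth, the OpenCL analogue of that zero-copy pattern (not the Windows-specific API I mentioned) is to ask the driver for a host-visible buffer and map it, and hope the runtime doesn't stage a copy behind your back. A minimal sketch, no error checking; whether the copy is actually avoided is entirely driver-dependent, which is exactly the problem for dGPUs:

```c
/* Minimal sketch of a "zero-copy"-style buffer in OpenCL 1.2.
   Whether the runtime really skips the staging copy is driver-dependent:
   iGPUs usually can, dGPUs still pull the data across PCIe. */
#include <CL/cl.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    cl_platform_id plat;
    cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    /* Ask the driver to allocate the backing store in host-visible memory. */
    size_t bytes = 1 << 20;
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_ALLOC_HOST_PTR,
                                bytes, NULL, NULL);

    /* Map for the CPU, fill, unmap: no explicit clEnqueueWriteBuffer copy,
       the GPU reads the same pages if the driver allows it. */
    void *p = clEnqueueMapBuffer(q, buf, CL_TRUE, CL_MAP_WRITE, 0, bytes,
                                 0, NULL, NULL, NULL);
    memset(p, 0xAB, bytes);
    clEnqueueUnmapMemObject(q, buf, p, 0, NULL, NULL);
    clFinish(q);

    printf("buffer ready for a kernel to consume\n");
    clReleaseMemObject(buf);
    clReleaseCommandQueue(q);
    clReleaseContext(ctx);
    return 0;
}
```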
 

deconstruct60

macrumors G5
Mar 10, 2009
12,296
3,892
I've been really trying to dig up information about this. All I can find is this: Intel has a closed UMA system for their iGPU. And that's it. For a dGPU there hasn't been a way for the GPU to go and read system RAM without asking the CPU to deliver the content.

If the RAM is attached to the CPU package, some part of that package has to deliver the data back. It may not touch an x86 core, but it has to be something in the package. But whose GPUs are you talking about?

"... With an IOMMU – which will be part of both AMD’s discrete CPUs and APUs – the chips will be able to support address-translation requests. Demers also notes that should their be a page fault, “the GPU will be happy with that – well, not necessarily happy, but it will survive that. It will wait until that page is brought in by the operating system and made local, then – bang! – it’ll keep on running.” ... "
http://insidehpc.com/2011/07/deep-inside-amds-master-plan-to-topple-intel/

There is an AMD whitepaper on the GCN architecture that talks about the IOMMU as well. To get to the flat virtual memory space that OpenCL 2.0 (and above) requires, you'd need some memory management hardware.
Intel has some OpenCL 2.1 certifications (with Kaby Lake), so getting DMA to memory is doable. However, hitting MMU traps to translate memory requests... I can see how that gets tagged as involving the 'CPU'. DMA isn't the root issue though.
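For concreteness, the flat address space OpenCL 2.0 requires is exposed as shared virtual memory: the CPU and GPU dereference the same pointer, with the IOMMU doing the translation underneath. A minimal coarse-grained SVM sketch of the host side (assumes an OpenCL 2.x driver, no error checking; the kernel and sizes are just placeholders):

```c
/* Minimal sketch of OpenCL 2.0 coarse-grained SVM: CPU and GPU share one
   virtual address, which is the practical meaning of "flat" memory here. */
#include <CL/cl.h>
#include <stdio.h>

static const char *src =
    "kernel void inc(global int *p) { p[get_global_id(0)] += 1; }";

int main(void) {
    cl_platform_id plat;
    cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueueWithProperties(ctx, dev, NULL, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, "-cl-std=CL2.0", NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "inc", NULL);

    /* One allocation, one pointer, visible to both CPU and GPU. */
    size_t n = 1024;
    int *data = (int *)clSVMAlloc(ctx, CL_MEM_READ_WRITE, n * sizeof(int), 0);

    /* Coarse-grained SVM: the CPU must map/unmap around its accesses.
       That map/unmap is exactly the "soft" coherence point. */
    clEnqueueSVMMap(q, CL_TRUE, CL_MAP_WRITE, data, n * sizeof(int), 0, NULL, NULL);
    for (size_t i = 0; i < n; i++) data[i] = (int)i;
    clEnqueueSVMUnmap(q, data, 0, NULL, NULL);

    /* The kernel receives the raw pointer: no clCreateBuffer, no write copy. */
    clSetKernelArgSVMPointer(k, 0, data);
    clEnqueueNDRangeKernel(q, k, 1, NULL, &n, NULL, 0, NULL, NULL);
    clFinish(q);

    clEnqueueSVMMap(q, CL_TRUE, CL_MAP_READ, data, n * sizeof(int), 0, NULL, NULL);
    printf("data[5] = %d (expect 6)\n", data[5]);
    clEnqueueSVMUnmap(q, data, 0, NULL, NULL);

    clSVMFree(ctx, data);
    return 0;
}
```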


However, I was being a bit sloppy. Going back and re-reading several of the early hUMA articles: if hUMA is taken as a strict requirement for hardware-enforced cache coherence, then no. If the cache coherence is all implemented strictly in hardware at the bus level, then the general category of GPUs hooked up via PCIe is out. To have coherent access to the memory, if the "CPU" implements snooping triggered only by x86-core memory requests, then you don't really have a choice but to send the request there.

Given how loose GPUs seem to be about coherence down at the L1 level, I don't think the CPUs are the major hang-up here. The GPUs are as bad or worse when it comes to non-uniform coherency.

So Intel isn't in the strictest compliance with hUMA, but there are aspects where, through open standards like the ones above plus some driver/foundation 'glue', you can get to pretty much flat memory (probably with some 'soft' coherence code layered on top).
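As a rough way to see where a given part sits on that spectrum, you can query which SVM tier the OpenCL driver reports: fine-grain system SVM plus atomics is roughly the fully hardware-coherent hUMA level, while coarse-grain buffer SVM is the "driver glue plus soft coherence" level. A quick sketch:

```c
/* Sketch: query which SVM tiers the device reports. Fine-grain system SVM
   plus atomics is roughly hUMA-level coherence; coarse-grain buffer SVM is
   the software-managed (map/unmap) level discussed above. */
#include <CL/cl.h>
#include <stdio.h>

int main(void) {
    cl_platform_id plat;
    cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

    cl_device_svm_capabilities caps = 0;
    clGetDeviceInfo(dev, CL_DEVICE_SVM_CAPABILITIES, sizeof(caps), &caps, NULL);

    printf("coarse-grain buffer : %s\n", (caps & CL_DEVICE_SVM_COARSE_GRAIN_BUFFER) ? "yes" : "no");
    printf("fine-grain buffer   : %s\n", (caps & CL_DEVICE_SVM_FINE_GRAIN_BUFFER)   ? "yes" : "no");
    printf("fine-grain system   : %s\n", (caps & CL_DEVICE_SVM_FINE_GRAIN_SYSTEM)   ? "yes" : "no");
    printf("svm atomics         : %s\n", (caps & CL_DEVICE_SVM_ATOMICS)             ? "yes" : "no");
    return 0;
}
```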

I don't think Intel is going to use CCIX (http://www.ccixconsortium.com/), but I'd be surprised if OmniPath stays an Intel x86-homogeneous solution long term. Then it is just a matter of hooking things up. But down at the desktop, in a strictly single-socket ecosystem... you have to get to the point where there are multiple "sockets" before they start worrying about coherence.
 
Last edited:

deconstruct60

macrumors G5
Mar 10, 2009
12,296
3,892
Radeon Instinct is not a graphics card, it is an accelerator. That is a different market and price bracket.

No, they are not. If the Frontier runs inference faster and costs 45% less, who is going to buy the MI25?
These two cards are nowhere near as rigidly segregated as you are making them out to be.
The Frontier does more than the Pro Duo, so it will probably cost more; against the MI25 it does more and costs less. They are both in the "professional" card bracket.


I don't see why the Frontier cannot cost $999 when it is simpler than the Duo, which compensates for the bigger chip. I think the Duo has been obsoleted; multi-GPU cards are fringe products.

It isn't simpler than the Duo in terms of yields. It isn't just that the chip is bigger (so there are process defects to yield around); it is also clocked about 29% higher. So you not only have to bin on defects but also on clock stability, which pushes yields down further. And you have the interposer and HBMv2 connections on top of that.

In contrast, the Duo uses a smaller chip that is likely on the 2nd iteration of the process design (the tweaked Polaris used for the RX 500 series), so defect yields are likely higher than in the 1st generation. Instead of pushing them to a higher clock, they leave them at the old WX 7100 rates (so slower). Binning for slower clocks is easier, so yields are higher on both fronts.
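To put rough numbers on the die-size part of that: with a simple Poisson yield model, defect yield falls off exponentially with die area, before you even bin for clocks or assemble the interposer. A back-of-the-envelope sketch; the defect densities are made-up illustrations and the Vega area is an approximation, not disclosed figures:

```c
/* Back-of-the-envelope Poisson yield model: Y = exp(-A * D0),
   A = die area in cm^2, D0 = defects per cm^2. The defect densities are
   illustrative guesses, not AMD/GlobalFoundries numbers. */
#include <math.h>
#include <stdio.h>

static double poisson_yield(double area_cm2, double d0) {
    return exp(-area_cm2 * d0);
}

int main(void) {
    double d0_mature = 0.10;   /* guess: process that has run for ~a year */
    double d0_new    = 0.25;   /* guess: newer, less mature flow          */

    double polaris_cm2 = 2.32; /* ~232 mm^2 Polaris 10 class die          */
    double vega_cm2    = 4.90; /* ~490 mm^2 large Vega-class die (approx.) */

    printf("mature small die : %.0f%% defect yield\n",
           100.0 * poisson_yield(polaris_cm2, d0_mature));
    printf("new large die    : %.0f%% defect yield\n",
           100.0 * poisson_yield(vega_cm2, d0_new));
    /* And this is before binning away dies that can't hold the ~29% higher
       clock, and before interposer/HBM assembly losses. */
    return 0;
}
```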

Furthermore, the end-user price of the Frontier is set by the value it produces for the customer. If it is faster than the previous generation for the inference case, then it is worth something close to the prices charged for those products (after all, those products had buyers).

The Duo is pointed largely at substantively different workloads and/or contexts. Obsoleted 1-2 months after release? Really? If AMD thought the two overlapped that much and the next one was cheaper, why would they have released the Duo at all? I could see it maybe if the Duo were a 6-7 month old product, but it was just announced last month (late April for a May release). The Duo probably has a lower TDP envelope, which means it fits in more systems with more mainstream power supplies. It probably has a cheaper price too.
[Both of which are true relative to the previous Pro Duo that was based on Fiji:
Fiji Pro Duo: 350W TDP, 16 TFLOPS peak, 8GB total VRAM, $1,499 initial price
Polaris Pro Duo: 250W TDP, 11 TFLOPS peak, 32GB total VRAM, $999 initial price
It is not a drag-race, TFLOPS-targeted card.]


Price it at $999 and an ecosystem will definitely grow. Average consumers will not buy it.

But will AMD grow? AMD could sell everything at cost.... I'm sure the customers would be happy while it lasted. Here is the real compute card market.

[Slide: Raja Koduri, "Radeon Rising", AMD 2017 Financial Analyst Day]

http://www.anandtech.com/show/11403/amd-unveils-the-radeon-vega-frontier-edition/2


The Frontier Edition is a compute card (that happens to have graphics output too), so it will probably be in the >$1,000 bracket. The Pro Duo isn't primarily targeting compute (this Pro Duo walked backwards from the previous version there) and it is just under the $1,000 mark, probably in the range of the last 16 TFLOPS card they had (even if the focus has shifted from single to half precision).

The Frontier isn't the last Vega-based card that AMD is going to do. That is why it is named Frontier: it is the first, "early access", bleeding-edge Vega card. That isn't going to come cheap, nor does it benefit AMD to sell it cheap. AMD needs to make some better margins. They can't keep borrowing money to keep the lights on.


P.S. Dual-GPU cards are more likely to evolve than disappear. Something like

Vegalike-GPU <--- Infinity Fabric ---> Vegalike-GPU <--- pci-e -->

wouldn't be surprising at all at some point. You can see the problem with Nvidia's V100: they have reached the limits of just how big you can make a die. It is more cost effective to start coupling smaller dies together. You can say that a multi-chip module (MCM) GPU is in the fringe product market... but that board you are adding to a computer system is essentially a very large module. So no, dual GPU is not going to disappear. It is just going to get smaller.
 
Last edited:

cube

Suspended
May 10, 2004
17,011
4,972
GT 1030 was quietly released at $70.

It seems that in general it performs better than the RX 550 with traditional APIs, while the latter can win with low-level APIs.

GT 1030 has 2 outputs, while RX 550 has 3.

30W vs 50W.

But you can get the MSI RX 460 LP for less after rebate. This is faster than 750 Ti, which is faster than 1030.

 
Last edited:

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
Because there is no link, consider this a rumor.

The RX Vega Frontier Edition case shows how deceptive AMD's marketing team can be. They cherry-picked the numbers and the cards used in their Deep Learning and SpecPerf benchmarking to picture it in the best light.

The thing is this: 83 ms in DeepBench still makes it the fastest GPU on the market. The GV100 is still not released, and Vega FE is coming at the end of June. The GP100 achieves around 120 ms in DeepBench on the latest drivers; AMD demoed it with older drivers, hence the 133 ms score.

The benefit of the GPU is that it costs a fraction of what the GP100 costs and of what the GV100 will cost. The FE is rumored to sport around a $1000 price tag. Compare this to the Tesla P100 - https://www.amazon.com/NVIDIA-Tesla...8&qid=1495693174&sr=8-4&keywords=Quadro+GP100 - and you get the picture of why AMD considers their offering disruptive.
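To make "disruptive" concrete, here is the arithmetic using the numbers floating around this thread; the ~$1000 FE price is a rumor and the DeepBench times are AMD's own demo figures, so treat it as a sketch, not a benchmark:

```c
/* Rough price/performance comparison using numbers quoted in this thread:
   Vega FE ~83 ms per DeepBench run at a rumored ~$1000; GP100-class parts
   ~120 ms at a roughly $6100-7400 street price. Lower ms is better, so use
   runs per second as the "performance" figure. */
#include <stdio.h>

static double perf_per_dollar(double ms_per_run, double price_usd) {
    double runs_per_sec = 1000.0 / ms_per_run;
    return runs_per_sec / price_usd;
}

int main(void) {
    double fe    = perf_per_dollar( 83.0, 1000.0); /* rumored FE price          */
    double gp100 = perf_per_dollar(120.0, 7400.0); /* HPE P100 price cited here */

    printf("Vega FE : %.5f runs/s per $\n", fe);
    printf("GP100   : %.5f runs/s per $\n", gp100);
    printf("ratio   : %.1fx\n", fe / gp100);  /* roughly an order of magnitude */
    return 0;
}
```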
 

cube

Suspended
May 10, 2004
17,011
4,972
That is a Tesla P100 and I see the HPE part at $7400, not $15000.

You can buy the Quadro GP100 for $6100.

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
That is a Tesla P100 and I see the HPE part at $7400, not $15000.

You can buy the Quadro GP100 for $6100.
My bad, I forgot about the Quadro, which, funnily enough, is what the Frontier Edition will actually compete with.
 

Stacc

macrumors 6502a
Jun 22, 2005
888
353
The thing is this: 83 ms in DeepBench still makes it the fastest GPU on the market. The GV100 is still not released, and Vega FE is coming at the end of June. The GP100 achieves around 120 ms in DeepBench on the latest drivers; AMD demoed it with older drivers, hence the 133 ms score.

Wait, you are saying marketing may exaggerate performance during a demo?!?!?! I am shocked!!!! /s

The benefit of the GPU is that it costs a fraction of what the GP100 costs and of what the GV100 will cost. The FE is rumored to sport around a $1000 price tag. Compare this to the Tesla P100 - https://www.amazon.com/NVIDIA-Tesla...8&qid=1495693174&sr=8-4&keywords=Quadro+GP100 - and you get the picture of why AMD considers their offering disruptive.

Unless you have a specific source on the $1K price tag, I am skeptical it would go that low. AMD has stated that they are aiming to double performance per dollar compared to Nvidia. For instance, if the Frontier Edition is the only thing that can smoothly play back video on the new 8K Dell monitor, then I'm sure the price tag will be much higher.
 

cube

Suspended
May 10, 2004
17,011
4,972
It seems Frontier is on track for June with the other Vega cards releasing in July.

My feeling is that there will be shortages again.
 

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
I see that many RX cards are sold out at amazon.com, both 400 and 500 series.
The spike in cryptocurrency prices is affecting sales of AMD GPUs.

Yesterday Bitcoin was at $2750. A month ago it was around $1650. So you can see where this is going.
 
Jul 4, 2015
4,487
2,551
Paris
The spike in cryptocurrency prices is affecting sales of AMD GPUs.

Yesterday Bitcoin was at $2750. A month ago it was around $1650. So you can see where this is going.
Crashed by $700 today though.

AMD is the card for miners anyway. The only Nvidia card good for mining is the 1070, but it costs more.
 
Jul 4, 2015
4,487
2,551
Paris
Typically for "speculative market" ;).

First rule of good investor: when there is a crash, and the prices are falling, buy more shares. ;)

Unfortunately it is too speculative. These are not tangible real-world assets, so we shouldn't use the same investment strategies. This dodgy market often acts independently of real events. The time for buying the coins has gone, unless you are rich enough to take the risk.

But it is a good time for mining anything but Bitcoin. Everyone should try it while it is still possible.
 
Jul 4, 2015
4,487
2,551
Paris
Bitcoin extends losses. Down $800-900 in two days and bringing down other coins. The price is moving up and down so fast you can't even think of this as a real asset or investment. It's become a casino.
 
Last edited:
Jul 4, 2015
4,487
2,551
Paris
For mining, Vega won't show much difference until after a year, when the difficulty of all the blockchains is much higher than it is now. But by then Ethereum and other coins might not use GPUs anymore.
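The difficulty point is just this arithmetic: for an Ethash-style chain the expected number of hashes per solution equals the difficulty, so expected time is difficulty divided by your hashrate, and a faster card only keeps pace if difficulty keeps climbing. A sketch with illustrative numbers (the Vega hashrate is a guess and the difficulty values are rough ballparks):

```c
/* Expected time to solve = difficulty / hashrate for an Ethash-style chain
   (target = 2^256 / difficulty, so on average `difficulty` hashes per
   solution). Hashrates and difficulties below are illustrative only. */
#include <stdio.h>

int main(void) {
    double difficulty_now   = 7.0e14; /* rough mid-2017 ballpark            */
    double difficulty_later = 2.1e15; /* hypothetical: 3x harder in a year  */

    double polaris_hs = 28.0e6;       /* ~28 MH/s, typical RX 480/580       */
    double vega_hs    = 40.0e6;       /* guess for a Vega-class card        */

    printf("Polaris now : %.1f days/solution\n", difficulty_now   / polaris_hs / 86400.0);
    printf("Vega now    : %.1f days/solution\n", difficulty_now   / vega_hs    / 86400.0);
    printf("Vega later  : %.1f days/solution\n", difficulty_later / vega_hs    / 86400.0);
    return 0;
}
```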
 