At this point, I think most people are better off just buying an eGPU box. I'm estimating the Radeon Pro Vega 16 to be around a $200-300 configuration option and the Vega 20 around $400-500. Apple puts a premium price on everything, and these added GPU options won't be any exception, especially considering the Vega GPUs are a new architecture with HBM2 memory.

It really sucks, though, how so many consumers, including myself, got screwed over by such a premature refresh.
 
I'm estimating the Radeon Pro Vega 16 to be around a $200-300 configuration option and the Vega 20 around $400-500.

You think so? My guess would be $50-100 for the Vega 16 and up to $200 for the Vega 20. I believe the main reason these are upgrade options instead of standard is simply that availability of these chips is going to be very low.

But we will see next week anyway. Right now, it's all just empty speculation.
 
I saw that Apple says Vega graphics will be available in late November. Does anyone know whether they will replace the existing graphics or will solely be an upgrade option for a fee?
 
I saw that Apple says Vega graphics will be available in late November. Does anyone know whether they will replace the existing graphics or will solely be an upgrade option for a fee?

Everything on the website points towards an optional upgrade. But we should see next week.
 
These chips will also be BTO options on Windows laptops. I don't think Apple can charge too much for them, but that never stopped them before, lol

I also think it will be around $100-200 max. For $200 you can get premium Nvidia Quadro cards on BTO machines.
 
These chips will also be BTO options on Windows laptops. I don't think Apple can charge too much for them, but that never stopped them before, lol

I also think it will be around $100-200 max. For $200 you can get premium Nvidia Quadro cards on BTO machines.
You will not get a Quadro P3000 for $200, which is what the Vega Pro 20 will compete with.
 
You think so? My guess would be $50-100 for the Vega 16 and up to $200 for the Vega 20. I believe the main reason these are upgrade options instead of standard is simply that availability of these chips is going to be very low.

But we will see next week anyway. Right now, it's all just empty speculation.

Apple will charge the maximum it believes the market will bear, simple as that, and it has rarely been proved wrong...

Q-6
 
Ugh! Why would Apple put the Vega 20 in using HBM2 and only 4 GB of VRAM? HBM is so expensive. They should've gone with the cheaper GDDR6 and upped it to 6 GB. Thermals are gonna suck; I wonder how long it can stress before it throttles...
 
Ugh! Why would Apple put the Vega 20 in using HBM2 and only 4 GB of VRAM? HBM is so expensive. They should've gone with the cheaper GDDR6 and upped it to 6 GB. Thermals are gonna suck; I wonder how long it can stress before it throttles...
So many misconceptions:

You can't expect better thermals with GDDR6 when it would consume six times more power than a single HBM2 memory stack (a single 4 GB HBM2 stack with 256 GB/s of memory bandwidth consumes 4 W of power; a single chip of GDDR6 also consumes 4 W, and you need six of them for a 6 GB memory subsystem, resulting in 24 W of power).

A 4 GB HBM2 stack costs $40. An 8 GB HBM2 stack costs $150.
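As a rough sanity check on the arithmetic above, here is a minimal sketch using the power figures quoted in this post (4 W per HBM2 stack, 4 W per GDDR6 chip); these are the poster's numbers, not vendor-verified specs:

```python
# Power-draw comparison from the figures quoted above (assumed, not verified):
# one 4 GB HBM2 stack draws ~4 W; one GDDR6 chip also draws ~4 W,
# but a 6 GB GDDR6 subsystem needs six chips.

HBM2_WATTS_PER_STACK = 4   # single 4 GB stack, ~256 GB/s
GDDR6_WATTS_PER_CHIP = 4   # single GDDR6 chip

def memory_power(devices: int, watts_per_device: int) -> int:
    """Total memory-subsystem power in watts."""
    return devices * watts_per_device

hbm2_total = memory_power(1, HBM2_WATTS_PER_STACK)    # 1 stack -> 4 W
gddr6_total = memory_power(6, GDDR6_WATTS_PER_CHIP)   # 6 chips -> 24 W

print(f"HBM2 (4 GB, 1 stack): {hbm2_total} W")
print(f"GDDR6 (6 GB, 6 chips): {gddr6_total} W")
print(f"GDDR6 draws {gddr6_total // hbm2_total}x the power")
```

Under those assumptions, the 6 GB GDDR6 configuration eats roughly 20 W more of the laptop's thermal budget than the single HBM2 stack.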
 
So many misconceptions:

You can't expect better thermals with GDDR6 when it would consume six times more power than a single HBM2 memory stack (a single 4 GB HBM2 stack with 256 GB/s of memory bandwidth consumes 4 W of power; a single chip of GDDR6 also consumes 4 W, and you need six of them for a 6 GB memory subsystem, resulting in 24 W of power).

A 4 GB HBM2 stack costs $40. An 8 GB HBM2 stack costs $150.


Makes sense, I guess. If you're not going to put in an 80-watt GPU, you might as well use only 4 GB of that super expensive, low-power HBM. Unfortunately, in today's landscape more and more textures are being added in graphics; that tiny 4 GB will cause future hurdles, IMO.
 
Makes sense, I guess. If you're not going to put in an 80-watt GPU, you might as well use only 4 GB of that super expensive, low-power HBM. Unfortunately, in today's landscape more and more textures are being added in graphics; that tiny 4 GB will cause future hurdles, IMO.
I suggest reading the Vega whitepaper and learning what the High Bandwidth Cache Controller does for the frame buffer.
https://radeon.com/_downloads/vega-whitepaper-11.6.17.pdf

It is a hardware implementation of OpenCL's unified memory, for everything.
 
I suggest reading the Vega whitepaper and learning what the High Bandwidth Cache Controller does for the frame buffer.
https://radeon.com/_downloads/vega-whitepaper-11.6.17.pdf

It is a hardware implementation of OpenCL's unified memory, for everything.

I'll take a look at it. Question: since it's the end of 2018 and Apple is still only putting 4 GB cards in its laptops, what's going to happen when texture requirements exceed 4 GB of VRAM? Also, if you go higher than 1080p, will those chips not work at all? The $1K Windows laptops are packing 1060s and 1070s with 50 to 100% more VRAM.
 
It will be a really loooooooong time before you hit the framebuffer limit with 4 GB of VRAM at 1080p...

Secondly, the addressing space of the HBCC allows you to alleviate this limit.
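For context on why the raw framebuffer itself is rarely the bottleneck, here is a quick back-of-the-envelope calculation (assuming 32-bit color and ignoring textures, geometry, and driver overhead, which are what actually fill VRAM):

```python
# Rough size of a single framebuffer at common resolutions,
# assuming 4 bytes per pixel (32-bit color).

def framebuffer_mib(width: int, height: int, bytes_per_pixel: int = 4) -> float:
    """Raw size of one framebuffer in MiB."""
    return width * height * bytes_per_pixel / (1024 ** 2)

resolutions = {
    "1080p": (1920, 1080),
    "MacBook Pro 15in native": (2880, 1800),
}
for name, (w, h) in resolutions.items():
    print(f"{name}: {framebuffer_mib(w, h):.1f} MiB per frame")
```

Even at the MacBook Pro's native 2880x1800, one 32-bit frame is only about 20 MiB; triple-buffered, that is still well under 1% of a 4 GB card, so the real VRAM pressure comes from assets, not the framebuffer.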
 
It will be a really loooooooong time before you hit the framebuffer limit with 4 GB of VRAM at 1080p...

But the MacBook Pro's native resolution is 2880x1800, not 1920x1080. If you try to run demanding 3D apps at native res, has the frame buffer been exceeded? I'm not trying to start an argument, but from a practical standpoint this GPU still seems woefully underpowered compared to a desktop GPU, even at 1080p. So for all practical purposes, people can expect to run their games and other programs at 1080p, or even 720p, with settings turned down. This is also going into the ray-tracing era, where Nvidia so far has no competition.
 
But the MacBook Pro's native resolution is 2880x1800, not 1920x1080. If you try to run demanding 3D apps at native res, has the frame buffer been exceeded? I'm not trying to start an argument, but from a practical standpoint this GPU still seems woefully underpowered compared to a desktop GPU, even at 1080p. So for all practical purposes, people can expect to run their games and other programs at 1080p, or even 720p, with settings turned down. This is also going into the ray-tracing era, where Nvidia so far has no competition.
The GPU is not underpowered because of VRAM. ALUs are always the most important part of a GPU; then come the architecture, instruction sets, GPU features, and physical design, and after that, VRAM and the frame buffer.

No. It will be a long time before 4 GB is not enough, even at 2K resolution.

P.S.
Two years ago, AMD released Radeon ProRender, an open-source real-time ray-tracing engine that can run on everything, even a potato. Nobody cared. Nvidia releases ray-tracing tech two years behind AMD: revolution! We must buy more Nvidia GPUs, where's my wallet?!

Cursed market.
 
The GPU is not underpowered because of VRAM. ALUs are always the most important part of a GPU; then come the architecture, instruction sets, GPU features, and physical design, and after that, VRAM and the frame buffer.


Why is it that when you compare an ATI/AMD card with an Nvidia card on paper, the AMD card destroys it with more shaders, more ALUs, and higher TFLOPS, but then you go to benchmarks and the Nvidia card beats it across the board? A water-cooled Vega 64 can't beat an FE 1080? Really? :rolleyes:
 
Why is it that when you compare an ATI/AMD card with an Nvidia card on paper, the AMD card destroys it with more shaders, more ALUs, and higher TFLOPS, but then you go to benchmarks and the Nvidia card beats it across the board? A water-cooled Vega 64 can't beat an FE 1080? Really? :rolleyes:
Because Nvidia GPUs since Maxwell have had an operand reuse cache, which saves bandwidth and register file size and helps feed the cores much better than without it. AMD does not have any form of operand cache; however, this patent was submitted in 2016 and will most likely land in AMD GPUs with Navi: http://www.freepatentsonline.com/20180121386.pdf

It is an interesting implementation of the idea of increasing GPU IPC by splitting the general-purpose register file into two smaller files, which are easier to execute from and fill more of the space available in the ALUs. The patent explicitly specifies a destination operand cache, which current GCN lacks completely. The patent is supposed to increase performance/mm² and performance/watt, and to save memory bandwidth and register file bandwidth. As a result, it will also increase core clocks. But whether this will be implemented in Navi remains to be confirmed.

Vega 64 LC is between GTX 1080 and GTX 1080 Ti in performance.
 
Because Nvidia GPUs since Maxwell have had an operand reuse cache, which saves bandwidth and register file size and helps feed the cores much better than without it. AMD does not have any form of operand cache; however, this patent was submitted in 2016 and will most likely land in AMD GPUs with Navi: http://www.freepatentsonline.com/20180121386.pdf

It is an interesting implementation of the idea of increasing GPU IPC by splitting the general-purpose register file into two smaller files, which are easier to execute from and fill more of the space available in the ALUs. The patent explicitly specifies a destination operand cache, which current GCN lacks completely. The patent is supposed to increase performance/mm² and performance/watt, and to save memory bandwidth and register file bandwidth. As a result, it will also increase core clocks. But whether this will be implemented in Navi remains to be confirmed.

Vega 64 LC is between GTX 1080 and GTX 1080 Ti in performance.


Very useful info. Thanks! Also, I would like to know about CUDA cores and stream processors. They're essentially unified shaders, but I have read they cannot be compared apples to apples. I read on enthusiast websites that Nvidia's shaders are larger and more complex, so they do more per cycle. So if the AMD card has 5,000 shaders, the Nvidia card can achieve the same with far fewer shaders; is this correct? Someone did a conversion with Maxwell and said every CUDA core is worth about 2.25 stream processors.
 
The GPU is not underpowered because of VRAM. ALUs are always the most important part of a GPU; then come the architecture, instruction sets, GPU features, and physical design, and after that, VRAM and the frame buffer.

No. It will be a long time before 4 GB is not enough, even at 2K resolution.

P.S.
Two years ago, AMD released Radeon ProRender, an open-source real-time ray-tracing engine that can run on everything, even a potato. Nobody cared. Nvidia releases ray-tracing tech two years behind AMD: revolution! We must buy more Nvidia GPUs, where's my wallet?!

Cursed market.

And some of those early RTX adopters aren't really impressed with their ridiculously expensive cards :)

Hopefully, developers will also support AMD's ProRender. It looks impressive. Too bad it took Nvidia looking into ray tracing to get developers serious about it. I would also like to see FreeSync become a standard for all monitors, rather than Nvidia's expensive G-Sync. AMD has made some good moves the past few years with open-source technologies. I'm really hoping 7nm Vega and Navi compete with Nvidia, hopefully in the mobile market as well, since I don't think Nvidia will be able to come out with RTX parts that have decent TDPs the way their 10 series did.
 
The thing is that Apple does NOT provide incremental updates like this... at least not often... they are happy to wait a year or two before updating. Certainly, they have never indicated they care much about the GPU in the MBP. Generally, it implies there is something wrong they are addressing... and this feeling is compounded by the low-key way they "announced" it.

Definitely a sneaky way for them to announce it- they knew there would be blowback.
 
The Radeon Pro 555X to 560X upgrade costs $100; I guess Vega will cost at least $400.

Just a thought...
 
The Radeon Pro 555X to 560X upgrade costs $100; I guess Vega will cost at least $400.

Just a thought...
It's not going to be a cheap upgrade, and if people think it's a minor price bump, think again. Apple wouldn't announce a minor spec update, nor would it add a new GPU, if there weren't going to be a significant price increase.
 
Very useful info. Thanks! Also, I would like to know about CUDA cores and stream processors. They're essentially unified shaders, but I have read they cannot be compared apples to apples. I read on enthusiast websites that Nvidia's shaders are larger and more complex, so they do more per cycle. So if the AMD card has 5,000 shaders, the Nvidia card can achieve the same with far fewer shaders; is this correct? Someone did a conversion with Maxwell and said every CUDA core is worth about 2.25 stream processors.
No, it doesn't.

Take the Radeon Pro 555X with 768 GCN cores and a 128-bit GDDR5 memory bus, and compare it to a similarly clocked (or downclocked) GTX 1050 with 768 CUDA cores and a 128-bit GDDR5 memory bus. Both will perform the same, because ALUs are ALUs. You could even use an Intel GPU with 768 cores, if one existed; it would perform exactly the same as those two GPUs.
A 1280-GCN-core chip with 192 GB/s of memory bandwidth will be at exactly the same level of performance as a 1280-CUDA-core GPU with 192 GB/s, if both are clocked at around 1.3 GHz.
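The core-for-core comparison above can be sketched with the standard peak-throughput formula (ALUs × 2 FLOPs per clock, for a fused multiply-add, × clock speed); the core counts and the 1.3 GHz clock are the ones quoted in this post, and real-world performance will of course diverge once memory and architecture differences kick in:

```python
# Theoretical peak FP32 throughput: each ALU retires one fused
# multiply-add (2 FLOPs) per clock, regardless of vendor branding.

def peak_gflops(alus: int, clock_ghz: float) -> float:
    """Peak single-precision GFLOPS = ALUs x 2 FLOPs x clock (GHz)."""
    return alus * 2 * clock_ghz

# 768-core parts (555X / GTX 1050 class) at ~1.3 GHz: ~2.0 TFLOPS either way
print(f"768 cores @ 1.3 GHz:  {peak_gflops(768, 1.3):.1f} GFLOPS")
# 1280-core parts at ~1.3 GHz: ~3.3 TFLOPS either way
print(f"1280 cores @ 1.3 GHz: {peak_gflops(1280, 1.3):.1f} GFLOPS")
```

At equal core counts and clocks the paper throughput is identical, which is the "ALUs are ALUs" point; the benchmark gaps discussed earlier come from how well each architecture keeps those ALUs fed.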

Vega Pro 20 vs Quadro P3000.

AMD GPUs with higher core counts are not fed properly and are unable to clock high enough. Nvidia GPUs are bigger because Nvidia increased the pipeline length (burning quite a lot of transistors at the physical-design level) so that the GPUs can clock higher.

For AMD to get the same level of scalability as Nvidia, it has to add an operand cache to its pipeline, which will save bandwidth, register file size, and power, and allow those GPUs to clock higher. This would also allow AMD to scale beyond four shader/geometry engines and thus get much higher geometry throughput, provided they also add properly working primitive shaders as a feature of Navi.

Until the GP100 and GV100 chips, it was AMD whose GPUs were able to do more with each cycle, because they were wider (64 cores per 256 KB register file vs. 128 cores per 256 KB register file). Nvidia has only recently reached parity with AMD on the "width" front, and it hasn't really made their GPUs faster in gaming, for a very simple reason: ALUs are the most important thing here.
And some of those early RTX adopters aren't really impressed with their ridiculously expensive cards :)

Hopefully, developers will also support AMD's ProRender. It looks impressive. Too bad it took Nvidia looking into ray tracing to get developers serious about it. I would also like to see FreeSync become a standard for all monitors, rather than Nvidia's expensive G-Sync. AMD has made some good moves the past few years with open-source technologies. I'm really hoping 7nm Vega and Navi compete with Nvidia, hopefully in the mobile market as well, since I don't think Nvidia will be able to come out with RTX parts that have decent TDPs the way their 10 series did.
Do not count on it. Want to know why? The reason is apparent on this very forum:

People want AMD to be competitive with Nvidia not because they want to buy AMD's products; they want to buy Nvidia GPUs cheaper!

That is the very reason why I genuinely hate the desktop and professional markets. Pure stupidity.
 