AMD v Nvidia v Apple gpu grudge match or what?

Discussion in 'MacBook Pro' started by hopefulhandle, May 23, 2015.

  1. hopefulhandle, May 23, 2015
    Last edited: May 23, 2015

    hopefulhandle macrumors newbie

    Joined:
    Feb 15, 2015
    Location:
    Northern California
    #1
    I'm not nearly as smart as many of the regulars and legacy people up here who have way more invested in this, but since this site is called MacRumors, let's address the elephant in the room:

    Does Apple have a grudge against Nvidia because of current lawsuits, and how do you believe that may have affected the decision to put an AMD chip into the newest rMBP 15?

    This is not a troll. This is an honest question.

    Let the rumors begin.
    :apple::apple::apple::apple::apple::apple::apple::apple::apple::apple::apple::apple::apple::apple::apple::apple:
     
  2. Traverse macrumors 603

    Traverse

    Joined:
    Mar 11, 2013
    Location:
    Here
    #2
    I think it comes down to the better performance-to-price ratio. AMD probably offered the best price on an acceptable dGPU, so Apple took it.

    I am a little surprised to see them using AMD again after "Radeongate" with the 2011 MBPs, but then again AMD is a well-known brand with experience. Plus, I'm still unsure whether the problem was AMD's fault or Apple's poorly designed logic board.

    And unless I've just missed it, I don't think there are any major lawsuits between Apple and Nvidia.
     
  3. hopefulhandle thread starter macrumors newbie

    Joined:
    Feb 15, 2015
    Location:
    Northern California
    #3
    I have witnessed many, too many, depositions in Palo Alto, Menlo Park, Mountain View, and Sunnyvale (I'm not a lawyer) to think anything about this other than that it's a grudge play.
    :)
    Does anyone else get that?
     
  4. brand macrumors 601

    brand

    Joined:
    Oct 3, 2006
    Location:
    127.0.0.1
    #4
    Did it ever occur to you that there may be other factors, ones you are not privy to, that drive this? Cost, performance, power usage, architecture, and drivers are just a few off the top of my head.
     
  5. hopefulhandle thread starter macrumors newbie

    Joined:
    Feb 15, 2015
    Location:
    Northern California
    #5
    Yes. It did. But there are hundreds of engineers who test these things and deem them appropriate for these laptop systems, using whatever state-of-the-art modeling they have.

    The reason I am asking whether Apple is begrudging Nvidia its discrete graphics chips and favoring AMD for those chips has nothing to do with specs.

    I simply want to know if Apple harbors a grudge against NVIDIA.

    And if so, is the result of that grudge the use of AMD chips in the 2015 rMBP model?

    I suppose this sounds trollish, but I think many people here could offer some good thoughts on it, so I'm asking whether it's all right to leave this here.
     
  6. Quu, May 24, 2015
    Last edited: May 24, 2015

    Quu macrumors 68020

    Quu

    Joined:
    Apr 2, 2007
    #6
    I'm a PC enthusiast. I use Apple laptops, but PCs are my hobby: building insane computers, custom watercooled up the wazoo. I've been building PCs for over a decade, often 2-3 systems a year.

    This has also meant that I've been following all the information. I know about all the architectures, the changes each generation brings, all the nitty-gritty features companies have added to their products and what years they added them.

    I'm saying all this because I feel that I'm very well informed about this subject. I've owned probably 25 NVIDIA graphics cards over the past decade from every architecture and process node they've ever used.

    The reason I think Apple has gone AMD across their entire lineup comes down to three things: OpenCL, cost, and flexibility.

    1. OpenCL. Apple authored OpenCL. It is very much their baby, and they designed it because they didn't want graphics hardware vendors to create proprietary compute frameworks that locked them in. At the start, AMD had Stream and NVIDIA made CUDA.

    As we all know, NVIDIA is still pushing CUDA, and it has become incredibly popular, used not only in games to drive features like PhysX but also in third-party software like Adobe Photoshop.

    If Apple keeps putting NVIDIA cards in Macs, it tells developers they need to optimise their software for CUDA more than for OpenCL. Imagine if the redesigned 2013 Mac Pro had gone with NVIDIA.

    Some may also say: but NVIDIA runs OpenCL, so what's the big deal? Well, OpenCL on NVIDIA hardware is treated like a second-class citizen; it runs poorly. So poorly, in fact, that the Iris Pro in the 15" MacBook Pro can run OpenCL faster than the 750M and likely faster than the 950M, which makes no sense in hardware terms, since both of those cards excel at the exact same compute tasks when they are performed through CUDA. (Edit: this is outdated information; the Maxwell chips, the 900 series and Titan X, all excel at OpenCL.)

    AMD abandoned Stream and went with OpenCL because they saw that, as an open standard like OpenGL or DirectX, it would be vendor agnostic and hopefully receive greater adoption.

    So for Apple, going with AMD is a choice that makes sure they don't end up in a future where they have to use only NVIDIA, with CUDA holding a de facto monopoly as the only GPU compute framework anyone uses.
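
    To make the vendor-agnostic point concrete, here is a rough sketch of what that looks like from a developer's seat. It uses pyopencl, a third-party Python binding that nobody in this thread mentions, so treat the specifics as an illustration rather than anything Apple ships:

    Code:
    # Minimal sketch (assumption: the third-party pyopencl bindings,
    # installed with `pip install pyopencl`). The same few lines list
    # whatever OpenCL devices are present -- AMD, NVIDIA, or an Intel
    # iGPU -- with no vendor-specific code path.
    import pyopencl as cl

    for platform in cl.get_platforms():
        print("Platform:", platform.name)
        for device in platform.get_devices():
            # device.name is whatever the driver reports, e.g.
            # "AMD Radeon R9 M370X Compute Engine" or "GeForce GT 750M"
            print("  Device:", device.name, "|", device.version)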

    2. Cost and flexibility. The D300, D500, and D700 in the Mac Pro are not real FirePro cards. They aren't really pro cards at all.

    Let's take the D700, the top-end card, as an example. It is a rebranded HD 7970. The reason I say this isn't just that the die is exactly the same; you'll find that even between Quadros and GeForces, or Core i7s and Xeons. It's called binning, and everyone does it. The reason I say it is that it doesn't have the ECC memory FirePro cards have, it doesn't have their enormous VRAM allotments, and you can run an HD 7970 in a Hackintosh and it gets detected as a D700 and works as such.

    But forget the D700 and just look at the range. Have you seen any PCs released with the D300/500/700? Nope. Have you seen AMD selling these cards to the general public? Nope. The only way to get these cards is in the Mac Pro from Apple.

    Isn't that a little strange? Well, this is AMD's way of doing things now. When the Xbox One and PS4 were being built, Microsoft and Sony courted both AMD and NVIDIA. They wanted the two companies to compete against each other to supply the GPU for each console.

    NVIDIA would not bring their price low enough and refused to make a custom chip for either console. AMD, however, built a custom chip for both. We know the Xbox One's is custom because it has 32MB of eSRAM on the die, which doesn't come with any AMD APU or CPU sold to the general public; it's unique to the Xbox One. And the PS4 uses a chip that is similar to some of the APUs AMD sells but differs from their mainstream parts in graphics core count and version.

    Now the reason I'm mentioning all this is that it illustrates AMD is more flexible. They are willing to rebrand cards, create entirely new cards under the FirePro moniker just for Apple, and design unique chips for high-volume, low-margin consoles.

    That M370X in the new MacBook Pro has never been seen before, period. We know it's a rebrand: a Cape Verde chip, GCN 1.0 from 2012. Apple got an affordable old processor with a 2015 name. It sounds new, but it costs less, and it ticks the OpenCL box.

    I personally don't think their going with AMD over NVIDIA has anything to do with the ARM chip-designer lawsuits. I think it's all down to OpenCL, price, and AMD's flexibility.
     
  7. leman macrumors 604

    Joined:
    Oct 14, 2008
    #7
    I don't know whether there is any grudge, but Occam's razor would suggest that negotiations between Nvidia and Apple broke down at some point. We know that OS X included (and maybe still does include) drivers for Nvidia Maxwell GPUs. One can only speculate what happened in the negotiation process. I would guess that AMD offered a much better price and/or Nvidia was not able to deliver the requested quantities.
     
  8. yjchua95 macrumors 604

    Joined:
    Apr 23, 2011
    Location:
    GVA, KUL, MEL (current), ZQN
    #8
    The D700 is actually a W9000, not an HD 7970.

    http://architosh.com/2013/10/the-mac-pro-so-whats-a-d300-d500-and-d700-anyway-we-have-answers/
     
  9. Quu, May 24, 2015
    Last edited: May 24, 2015

    Quu macrumors 68020

    Quu

    Joined:
    Apr 2, 2007
    #9
    We know that's not accurate, as the W9000 has ECC memory. If it were a W9000 FirePro card, Apple would have just said it was a W9000, and there would be no need to rebrand it as a D700. All indications are that it is an HD 7970 rebranded as a D700.

    Keep in mind the W9000 itself is also a rebranded HD 7970, but it has ECC memory, which in my opinion, and many people's, is the defining hardware change needed to justify calling these workstation cards that command $3,000 price tags.

    Trust me, if these were real W9000s, Apple would have simply called them that. It would have been a huge blast of publicity to get $6,000 worth of GPUs into the Mac Pro for half their cost.

    You can check out the comparison of the W9000 and HD 7970 here: http://www.game-debate.com/gpu/inde...o-w9000-vs-radeon-hd-7970-oc-gigabyte-edition

    The Gigabyte card is slightly overclocked from the factory. By default both cards run at a 975MHz clock speed, and both use the same Tahiti XT die (GCN 1.0).

    By the way, here is a screenshot of the D700s from a Mac Pro running Windows, under GPU-Z:

    [GPU-Z screenshot of a Mac Pro D700 under Windows, from the AnandTech review linked below]

    http://www.anandtech.com/show/7603/mac-pro-review-late-2013/9

    This is from AnandTech's review: ECC is disabled, which is why GPU-Z thinks it's looking at an HD 7970 and not a W9000. Also from the same review: FirePro cards always get CrossFire Pro, for pro use, yet in Windows these D700s only offer CrossFire X, the consumer version of CrossFire that ships on HD 7970s.
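
    For what it's worth, the ECC distinction can also be queried from software rather than inferred from GPU-Z. A rough sketch, again assuming the third-party pyopencl bindings (whether a given driver reports the flag honestly is a separate question):

    Code:
    import pyopencl as cl

    for platform in cl.get_platforms():
        for device in platform.get_devices():
            # CL_DEVICE_ERROR_CORRECTION_SUPPORT is true only for devices
            # with ECC-protected memory; consumer Radeons report false,
            # which is the D700-vs-W9000 distinction being argued here.
            ecc = device.get_info(cl.device_info.ERROR_CORRECTION_SUPPORT)
            print(device.name, "ECC:", bool(ecc))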
     
  10. leman macrumors 604

    Joined:
    Oct 14, 2008
    #10
    God, no. Iris Pro is faster at some compute tasks because it is much more efficient at random memory access and resource management! It's actually a very decent GPU; its main problem is that it lacks the VRAM bandwidth of dedicated cards. Pair it with GDDR5 and give it a few more samplers, and it will outperform the 750M in every way. Furthermore, the Nvidia Kepler GPUs (like the 750M) have been widely criticised for their low compute performance. In contrast, Maxwell GPUs (e.g. the 850M/950M) are much better at efficiently utilising their computational resources, so they get massive speedups over Kepler. Just look at the benchmarks: the Maxwell GPUs wipe the floor with both Intel and AMD.

    For some reason people still think that Nvidia GPUs are bad at OpenCL. This likely has to do with the Kepler GPUs, which were notoriously bad at compute tasks (and they were bad regardless of whether you used CUDA or OpenCL). That is no longer true with the Maxwell architecture, which excels at compute. And while OpenCL might be a second-class citizen on Nvidia platforms, that hardly matters in practice: there is no longer a performance difference between OpenCL and CUDA on Nvidia GPUs, unless you are doing specialised work where the APIs differ.

    In conclusion, performance and OpenCL have nothing to do with the inclusion of Cape Verde in the rMBP. No matter how you look at it, Nvidia's solutions are better. So there have to be other reasons.
     
  11. Quu macrumors 68020

    Quu

    Joined:
    Apr 2, 2007
    #11
    You're right. The Maxwell GPUs, the 960, 970, 980, etc., all excel at OpenCL and are actually the fastest OpenCL cards available right now, apart from the Titan X, which is the absolute fastest and also an NVIDIA card. I was completely wrong, and I've edited my post above to reflect that.

    It would appear NVIDIA has cleaned up their act when it comes to OpenCL. But I would still say Apple doesn't want to give CUDA any leg up; after all, they want developers to use OpenCL, since AMD cannot run CUDA at all, so any software that is CUDA-exclusive is locked out for AMD owners.
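
    To put that lock-in argument in code terms, here is a small sketch, again assuming the third-party pyopencl bindings. The kernel source is plain OpenCL C and runs unchanged on an AMD, NVIDIA, or Intel device; the CUDA equivalent would only ever run on NVIDIA hardware, which is exactly the dependency Apple presumably wants to avoid:

    Code:
    import numpy as np
    import pyopencl as cl

    ctx = cl.create_some_context()   # picks any available OpenCL device
    queue = cl.CommandQueue(ctx)

    # Vendor-neutral OpenCL C kernel: doubles every element in place.
    kernel_src = """
    __kernel void double_it(__global float *data) {
        int i = get_global_id(0);
        data[i] = data[i] * 2.0f;
    }
    """
    program = cl.Program(ctx, kernel_src).build()

    host = np.arange(8, dtype=np.float32)
    buf = cl.Buffer(ctx, cl.mem_flags.READ_WRITE | cl.mem_flags.COPY_HOST_PTR,
                    hostbuf=host)
    program.double_it(queue, host.shape, None, buf)  # 8 work-items
    cl.enqueue_copy(queue, host, buf)
    print(host)  # [ 0.  2.  4.  6.  8. 10. 12. 14.]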

    If anyone is curious about the NVIDIA OpenCL performance gains with Maxwell take a look at this:

    [Benchmark chart of Maxwell OpenCL performance gains]
     
  12. dusk007, May 24, 2015
    Last edited: May 24, 2015

    dusk007 macrumors 68040

    dusk007

    Joined:
    Dec 5, 2009
    #12
    My conspiracy theory is that Apple has a Skylake, Intel-only redesign in the works, and they don't want a GPU against which that Skylake iGPU would look like a downgrade. So they put in some old, mediocre GPU and release something that is little more than an intermediate update.
    If they put in even an 850M Maxwell, which Intel won't beat, they get confused customers who don't understand why the new generation is slower than the old. But Intel can probably match or beat this M370X rebrand.

    It could also be that Apple just wants to switch to AMD to show Nvidia they aren't afraid to and aren't dependent on Nvidia. Possibly Nvidia knows what it has with Maxwell and won't budge on discounts, which does not sit well with Apple. So Apple went with AMD for a short update cycle that doesn't hurt too much, possibly to get a better deal on the next batch.

    I don't think that will work, though, which is why I like my first theory better. Otherwise, upgrading to an 850M long ago should have made sense. Why did they wait so long for this update at all, only to deliver such a halfhearted one?

    I don't think there is a serious grudge with Nvidia. I think for most of the Macs it comes down to money and discounts. Apple is not dependent on having the best specs the way the Windows competition is; their customers buy even with subpar hardware. So Apple picks the GPUs from AMD, on which they get a far better deal than Nvidia would be willing to offer. Why should Nvidia feel the need to offer a rich company like Apple a good deal when they command nearly the entire rest of the market and offer the best product? A company like Acer might pass any savings on to customers, so Nvidia sells more volume if it offers discounts, but with Apple that wouldn't be the case, so there really is no reason not to make Apple pay the standard price.
     
  13. magbarn macrumors 68000

    Joined:
    Oct 25, 2008
    #13
    As they say, follow the money trail. This hoary chip could've been put in the original rMBP three years ago, so the only reasonable explanation I can think of is that AMD has a huge stockpile of these things (check out AMD's horrible quarterly reports to see how much unsold inventory is hurting them!) and sold them to Apple for half of what Nvidia is charging for its superior Maxwell chips.
     
  14. hopefulhandle thread starter macrumors newbie

    Joined:
    Feb 15, 2015
    Location:
    Northern California
    #14
    I would like two of what you had. And thank you everyone!
     
  15. leman macrumors 604

    Joined:
    Oct 14, 2008
    #15
    Haha, dusk, you just reached a new level as a conspiracy theorist :)
     
  16. dusk007 macrumors 68040

    dusk007

    Joined:
    Dec 5, 2009
    #16
    We will know in about half a year (if Skylake-H shows up as that one supposed roadmap leak says) what is what. I don't think it is so unlikely that they'll drop the dGPU. Before the 750M model showed up, quite a few people in these forums speculated that they would drop the dGPU and embrace Iris Pro. They ended up using both, which nobody even considered back then.
     
  17. thundersteele macrumors 68030

    Joined:
    Oct 19, 2011
    Location:
    Switzerland
    #17
    Apple and Samsung are involved in a ton of lawsuits against each other, yet Apple is happy to buy its mobile processors, cameras, screens, and SSDs from Samsung, and Samsung is happy to provide them.

    I don't think there is room for holding a grudge in this business. It is much more likely that AMD offered a better GPU package to Apple - all Macs with discrete GPUs are now running AMD. Nvidia's GPUs are currently priced higher (check e.g. Alienware 15'', all Nvidia options are more expensive), and this would force Apple to change their price points, which they don't want to do.


    Although I like dusk's theory as well: put in a bad enough discrete GPU now, so that a future integrated-GPU-only MBP can still look competitive.
     

Share This Page