Reason why Apple chose AMD

Discussion in 'iMac' started by Jackotai, Apr 6, 2016.

  1. Jackotai macrumors newbie

    Joined:
    Nov 4, 2015
    #1
    I am thinking it is because AMD is ready for DirectX 12, which is almost the same thing as Metal on the Apple side.
    Recently I tested Ashes of the Singularity, the first fundamentally DirectX 12 game, on my M395X. I found my card gains around 15% under DirectX 12, which puts it almost 80% ahead of the GTX 970M in frame rate and about 10% below the 980M. We will likely see more performance gains under DirectX 12 from driver updates later. Therefore I don't think Apple will shift to Nvidia in the coming update.
     
  2. ninja2000 macrumors 6502

    Joined:
    Dec 16, 2010
    #2
    I am guessing it was a typo but 80% more than a 970m???

    I just fired up Ashes in DX12 on both my iMac with the M395X and an Alienware 17 R3 with a 970M. At 1080p high settings, the M395X is about 22% better than the 970M.
     
  3. maflynn Moderator

    maflynn

    Staff Member

    Joined:
    May 3, 2009
    Location:
    Boston
    #3
    I'm thinking it was purely financial; a 10 percent performance difference isn't that much of a reason to shift to a new GPU.
     
  4. Jackotai thread starter macrumors newbie

    Joined:
    Nov 4, 2015
    #4
    My test was at the Crazy preset at 1080p (the M395X got 23.8 FPS while the 970M got 15.x FPS). Would you please test again to make sure, in case I am wrong? :)
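    Taking the truncated 15.x as roughly 15 to 16 FPS, those raw figures work out to

    \[
    \frac{23.8}{15.5} \approx 1.54,
    \]

    i.e. the M395X roughly 50 to 60% faster than the 970M, somewhere between the 22% and 80% claims in this thread.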
     
  5. Maxx Power macrumors 6502a

    Maxx Power

    Joined:
    Apr 29, 2003
    #5
    I am pretty sure it is partly due to AMD's current compute prowess (hardware and software OpenCL support). Nvidia has been pushing CUDA instead.
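    To make the OpenCL point concrete, here is a minimal vector-add sketch using the pyopencl bindings (illustrative only: it assumes pyopencl and an OpenCL runtime are installed, and the kernel and buffer names are arbitrary). The same kernel source runs unchanged on AMD, Nvidia, or Intel devices, which is the portability argument against CUDA:

```python
# Minimal OpenCL vector add via pyopencl -- a sketch, not benchmark code.
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()   # picks any available OpenCL device
queue = cl.CommandQueue(ctx)

a = np.random.rand(1 << 20).astype(np.float32)
b = np.random.rand(1 << 20).astype(np.float32)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# The kernel is plain OpenCL C: vendor-neutral, unlike CUDA.
program = cl.Program(ctx, """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *out) {
    int gid = get_global_id(0);
    out[gid] = a[gid] + b[gid];
}
""").build()

program.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)

out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)   # blocks until the copy completes
assert np.allclose(out, a + b)
```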
     
  6. Novus John macrumors member

    Joined:
    Sep 27, 2015
    #6
    Money. AMD is in no situation to carefully pick its buyers, they're ready to do anything and everything for money at this point. Because of this I predict that the next GPUs in Apple hardware will also be from AMD.
     
  7. yellowscreen macrumors regular

    yellowscreen

    Joined:
    Nov 11, 2015
    #8
    I think it's because Nvidia's mobile chips don't support the iMac's 5K screen resolution, and they probably designed that new LCD timing controller (?) with AMD. Just a guess.
     
  8. Laai macrumors member

    Joined:
    Apr 23, 2012
    Location:
    Germany
    #9
    Yeah, I also read somewhere that the current mobile nVidia graphics cards do not support 5K displays. Something about the connector they were using to drive the display maxing out at 4K, I believe. That is probably the biggest reason why Apple went with AMD this time around, the second one probably being higher margins.

    But hey, at least they updated the Boot Camp drivers!
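    The bandwidth arithmetic backs this up. Counting only pixel data at 24 bits per pixel, a 5K panel at 60 Hz needs

    \[
    5120 \times 2880 \times 60\,\mathrm{Hz} \times 24\,\mathrm{bit} \approx 21.2\ \mathrm{Gbit/s},
    \]

    while a single DisplayPort 1.2 link carries 4 lanes at 5.4 Gbit/s, which is only about 17.28 Gbit/s of payload after 8b/10b encoding overhead. That is why one DP 1.2 stream tops out around 4K at 60 Hz, and why Apple built a custom timing controller that drives the 5K panel internally as two halves.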
     
  9. maflynn Moderator

    maflynn

    Staff Member

    Joined:
    May 3, 2009
    Location:
    Boston
    #10
    Excellent point, something that I really didn't even consider.
     
  10. varian55zx macrumors 6502a

    varian55zx

    Joined:
    May 10, 2012
    Location:
    San Francisco
    #11
    Hmm, you don't have to read anything besides the title to say 'money'.

    What else would it be? Not money? lol.
     
  11. turbineseaplane macrumors 68020

    turbineseaplane

    Joined:
    Mar 19, 2008
    #12
    Hmm...maybe they should consider using a desktop GPU for a desktop computer?

    What a concept!

    lol

    The desktop Nvidia cards absolutely do 5K (I'm looking at one right now: a Dell 5K monitor on a 980 Ti).
     
  12. boto macrumors 6502

    boto

    Joined:
    Jun 4, 2012
    #13
    The reason Apple chose AMD is that they made a good deal with them: they get discounted, custom-made GPUs.
     
  13. mildocjr macrumors 65816

    #14
    AMD = dirt cheap, speedy, but high failure rate
    NVidia = expensive, slower, but low failure rate

    It all depends on how you look at it. Apple may have gone with AMD just to keep costs down on the rMBP, since the other components are more expensive.

    Apple does flip-flop between NVidia and AMD from time to time; perhaps now is just the time for AMD.
     
  14. koyoot macrumors 601

    koyoot

    Joined:
    Jun 5, 2012
    #15
    In DirectX 12 all the bottlenecking of AMD GPUs has been lifted, and Nvidia cannot gain anything because it wasn't bottlenecked anywhere. The scheduling in DX11 was the problem, the serial nature of the API to be precise. In DX12, SIMD-vs-SIMD performance will be relatively equal for both vendors, and what matters is compute performance, which carries over into games. Finally a 6.1 TFLOPs GPU is as fast as a 6.1 TFLOPs GPU regardless of vendor (exactly what we see with the R9 390X and the reference GTX 980 Ti), and an 8.6 TFLOPs GPU is much faster than a 6.7 TFLOPs GPU (Fury X vs Titan X).

    It is also quite funny to see the R9 380X tie with the GTX 970. The only thing that made Nvidia cards better was proprietary software and the nature of the API that most games used. AMD's hardware is at least two years ahead of Nvidia's; what was closing the gap was software: CUDA, Iray, drivers, GameWorks. Now that software has caught up, there is quite a gap between the last-generation architectures.

    I genuinely suggest educating yourselves, guys, by reading hardware forums (Anandtech, for example). People have been discussing all this for about eight months.
     
  15. 952863, Apr 11, 2016
    Last edited: Apr 11, 2016

    952863 Suspended

    Joined:
    Mar 30, 2015
    #16
    Apple went with AMD because AMD offered a cheap deal. It was all about the $$$. They should have gone with Nvidia IMO. They make much more reliable graphics cards, and they run at cooler temperatures.

    But Apple should allow people to choose between AMD and Nvidia. I even talked to the feedback team at Apple about this, and even they thought it was a good idea. And that choice should be available across their whole product line, from iMacs to MacBook Pros.
     
  16. koyoot macrumors 601

    koyoot

    Joined:
    Jun 5, 2012
    #17
    Then explain to me: how can a 120W GPU operate at lower temperatures than another 120W GPU? How can one be less efficient than the other?

    The GTX 980M and the R9 M395X both have exactly the same TDP rating.
     
  17. mildocjr macrumors 65816

    #18
    I was thinking about arguing the point that one might run at a higher speed and require more voltage more often than the other, but I'll leave that to someone more versed in GPUs.

    As for TDP, that is the amount of heat the cooling system is designed to dissipate, which in practice tracks the power the chip draws under sustained load. If both are designed around the same blueprint, the TDP won't be much different; 120W is just how much power it needs to run at maximum. What matters more to Apple is which one runs on the least power the rest of the time: if you're staring at the desktop and one card draws 80W while the other draws 75W, the one that uses only 75W will get you the most battery life.
     
  18. 952863 Suspended

    Joined:
    Mar 30, 2015
    #19
    The 980M is a much superior card. That is what should have been in the iMacs, as far as I am concerned.
     
  19. koyoot macrumors 601

    koyoot

    Joined:
    Jun 5, 2012
    #20
    By what measure? Compute: weaker. Graphics without any bottlenecks: weaker. Thermal envelope and power: the same.
    The base clock for the 980M's 1536 Maxwell CUDA cores is 1035 MHz, with a Turbo state of 1127 MHz. The TDP rating is for the base clock (the Turbo mode is similar to Intel CPUs'). So it will boost for a very short amount of time, then rapidly drop back down.

    Both GPUs consume the same amount of power and produce the same amount of heat, because both are power-gated. That's how TDP currently works.
     
  20. Fancuku macrumors 6502a

    Fancuku

    Joined:
    Oct 8, 2015
    Location:
    PA, USA
    #21
    This.
    Ain't no two ways about it.
     
  21. 952863, Apr 11, 2016
    Last edited by a moderator: Apr 27, 2016

    952863 Suspended

    Joined:
    Mar 30, 2015
    #22
    By looking at benchmarks. The 980M is a better card.
     
  22. mildocjr macrumors 65816

    #23
    Well, I guess all that's left is simply pricing and logistics.

    Yep, try to come up with something more constructive instead of something that all businesses do to make money.

    Windows manufacturers just waste your time by throwing on a bunch of bloatware that helps them cut costs. Otherwise you end up paying the same price for an equivalent <insert manufacturer here> Signature series laptop.
     
  23. koyoot macrumors 601

    koyoot

    Joined:
    Jun 5, 2012
    #24
    No it isn't. The GTX 980M is based on the desktop GTX 970, and according to DX12 benchmarks the R9 380X, which is exactly the same die as the R9 M395X, is equal to the GTX 970. The problem is that the GTX 980M is a cut-down version of the GTX 970, with lower clocks to keep it within the 120W TDP. The R9 M395X has exactly the same number of GCN cores as the R9 380X, just a lower core clock: 909 MHz vs 980 MHz. There is no way the GTX 980M can be faster than the R9 M395X with current drivers and in the current environment.

    Compute is also much lower on the GTX 980M: 3.1 TFLOPs vs 3.7 TFLOPs for the R9 M395X. The only thing that makes the GTX 980M better is CUDA, proprietary software. Nothing else.
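    Those compute figures follow from the usual single-precision peak formula, 2 FLOPs (one fused multiply-add) per shader per clock. Taking the 980M's 1536 CUDA cores at its 1035 MHz base clock, and 2048 stream processors at 909 MHz for the M395X (the count it inherits from the 380X die):

    \[
    \mathrm{FLOPS_{peak}} = 2 \times \mathrm{shaders} \times \mathrm{clock}
    \]
    \[
    \text{980M: } 2 \times 1536 \times 1.035\,\mathrm{GHz} \approx 3.2\ \mathrm{TFLOPs}, \qquad
    \text{M395X: } 2 \times 2048 \times 0.909\,\mathrm{GHz} \approx 3.7\ \mathrm{TFLOPs}
    \]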
     
  24. mildocjr macrumors 65816

    #25
    Umm, you kind of argued against yourself: a lower clock means it's slower, but its boost is higher, which means it outperforms the 980 MHz part at boost; and with the core clock being 909 vs 980, it requires less voltage. Apple will pick energy savings over performance every time.

    Also, yes, NVidia is proprietary, but so is Apple, so what's the point of that argument? Does the fact that NVidia is proprietary make it that much more difficult for Apple? A driver is a driver is a driver; it's all about hardware requirements and which card uses the least power for equivalent performance in its sub-tier.
     
