iMac m295x beating the D700?

Discussion in 'Mac Pro' started by N19h7m4r3, Oct 22, 2014.

  1. N19h7m4r3, Oct 22, 2014
    Last edited: Oct 22, 2014

    N19h7m4r3 macrumors 65816

    N19h7m4r3

    Joined:
    Dec 15, 2012
    #1
It seems the new Tonga-based M295X could be a little powerhouse.

Danny Winget ran Cinebench on his new 5K iMac with an i7-4790K and M295X and got an OpenGL score of 105 FPS in OS X.

[screenshot]

For reference, here are my D700s' Cinebench results in OS X 10.9.2 and Windows 8.1.
[screenshot]

His Mac Pro with D500s gets only
[screenshot]

    His video of it.


I find that very impressive, to say the least, and I wonder how the M295X will do in other tests once proper in-depth reviews start coming out.
     
  2. Pressure macrumors 68040

    Pressure

    Joined:
    May 30, 2006
    Location:
    Denmark
    #2
They should match in Yosemite; they both have 2048 stream processors.
     
  3. theSeb macrumors 604

    theSeb

    Joined:
    Aug 10, 2010
    Location:
    Poole, England
    #3
Unigine Valley ExtremeHD, run on different platforms (unfortunately I don't have the Windows figures for the M295X yet).
     


  4. xav8tor macrumors 6502a

    Joined:
    Mar 30, 2011
    #4
Last time I checked, Cinebench's OpenGL test is a CPU-limited benchmark, and you've got to account for that too. The 4790K is at the top of the heap in single-threaded performance, and it's my understanding that this applies to Cinebench's OpenGL run.
     
  5. N19h7m4r3 thread starter macrumors 65816

    N19h7m4r3

    Joined:
    Dec 15, 2012
    #5
Well, the 4790K loses to my 3.5 GHz 6-core Xeon in Cinebench's CPU test, while my D700 loses to the M295X in FPS.
     
  6. theSeb macrumors 604

    theSeb

    Joined:
    Aug 10, 2010
    Location:
    Poole, England
    #6
You're now talking about two different tests. The CPU test is multithreaded, so the Mac Pro should and does beat the 4790K. The OpenGL test, on the other hand, apparently uses only a single thread plus the GPU.

The 4790K will beat your Mac Pro's Xeon in single-threaded work and lose in multithreaded work when it comes to CPU tasks.
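theSeb's point above can be sketched as a toy calculation. The clock speeds and the perfect-scaling assumption are illustrative only; real scores also depend on IPC, turbo behaviour, and how well each workload actually threads.

```python
# Toy model: why a higher-clocked quad-core (i7-4790K) can win
# single-threaded work yet lose multithreaded work to a lower-clocked
# 6-core Xeon. Assumes perfect scaling up to the core count, which
# real chips don't achieve.

def relative_score(clock_ghz, cores, threads_used):
    """Relative work per unit time for a given number of busy threads."""
    return clock_ghz * min(cores, threads_used)

i7_clock, i7_cores = 4.0, 4      # illustrative 4790K figures
xeon_clock, xeon_cores = 3.5, 6  # the 3.5 GHz 6-core Xeon from the thread

# Single-threaded (like Cinebench's OpenGL test, per xav8tor):
print(relative_score(i7_clock, i7_cores, 1))      # 4.0 -> 4790K wins
print(relative_score(xeon_clock, xeon_cores, 1))  # 3.5

# Fully multithreaded (like Cinebench's CPU test):
print(relative_score(i7_clock, i7_cores, 8))      # 16.0
print(relative_score(xeon_clock, xeon_cores, 8))  # 21.0 -> Xeon wins
```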
     
  7. N19h7m4r3 thread starter macrumors 65816

    N19h7m4r3

    Joined:
    Dec 15, 2012
    #7
    Ah okay, I'm just quoting the scores from the tests. Nothing else.
     
  8. theSeb macrumors 604

    theSeb

    Joined:
    Aug 10, 2010
    Location:
    Poole, England
    #8
I am not 100% sure how the OpenGL test in Cinebench works and have never bothered to find out, so I am trusting xav8tor; what he said makes perfect sense.

    From what I've seen so far, the M295X has very similar performance to a single D700 in OS X.
     
  9. N19h7m4r3 thread starter macrumors 65816

    N19h7m4r3

    Joined:
    Dec 15, 2012
    #9
Well, I just ran Cinebench 4 times, and rebooted after the first attempt because I only got 46 FPS.

This is the best result out of the 4 runs. Yosemite has dropped my performance.
During the beta I got improved OpenGL performance, especially in games like Hitman: Absolution and Tomb Raider, where my minimum FPS more than doubled.

Something odd is going on here. Yosemite 10.10 was also cleanly installed, not upgraded from Mavericks or the betas.

[screenshot]
     
  10. theSeb macrumors 604

    theSeb

    Joined:
    Aug 10, 2010
    Location:
    Poole, England
    #10
You're right that something odd is going on with Cinebench, because I have the exact same nMP as you and I get similar results in Yosemite. However, looking at my Unigine scores across different platforms, I am not sure that Yosemite is the culprit. It would be good if we could find someone with a nMP in the same config who is still on Mavericks; then we could compare Unigine Valley ExtremeHD scores between Yosemite and Mavericks. I've never really trusted Cinebench as a reliable benchmark.
     
  11. N19h7m4r3 thread starter macrumors 65816

    N19h7m4r3

    Joined:
    Dec 15, 2012
    #11
Hopefully there is someone, although a difference of nearly 30 FPS is alarming to say the least. As it stands, the M295X nearly matches a GTX 780 Ti in Cinebench, with just a 7 FPS difference.

    We might still have to wait for more in-depth reviews from the likes of Anandtech as well.

From tonymacx86:
    http://www.tonymacx86.com/user-buil...igabyte-gtx-780-ti-windforce-oc-32gb-ram.html
[screenshot]
     
  12. theSeb macrumors 604

    theSeb

    Joined:
    Aug 10, 2010
    Location:
    Poole, England
    #12
In my mind, the 780 Ti with a 4770K result proves what xav8tor was saying: the 4790K in the riMac is inflating the Cinebench OpenGL score to ludicrous levels. There is no way the M295X can match a 780 Ti.

Look at the chart I made and posted above. The M295X is just a little bit faster than a 780M, so for it to be nearly as fast as a 780 Ti simply makes no sense, considering what we know about the specs of both cards, unless we take xav8tor's explanation into account.
     
  13. N19h7m4r3, Oct 22, 2014
    Last edited: Oct 22, 2014

    N19h7m4r3 thread starter macrumors 65816

    N19h7m4r3

    Joined:
    Dec 15, 2012
    #13
Yes, we all need more information: what's going on with its score, and why the D700 is suddenly doing much worse.

I'm going to download Unigine to see what I get there as well.

EDIT: Looking at Barefeats, the M295X is a damn good card for gaming in OS X at 2560x1440.

Nearly double the FPS in Tomb Raider, which uses OpenGL 4.0, and just a tad faster than the 780M in Diablo, which is OpenGL 3.2.

It's certainly faster at OpenCL, anyway.
     


  14. netkas macrumors 65816

    Joined:
    Oct 2, 2007
    #14

Over 2,000 points in Unigine Valley on a GTX Titan in a Mac Pro 2008 here. :rolleyes:
     
  15. xav8tor macrumors 6502a

    Joined:
    Mar 30, 2011
    #15
Here again, you've got to look into each app and determine the relative importance of cores/threads used, CPU generation and clock speed, GPU drivers, workstation versus consumer/gaming GPU, OpenCL, OpenGL, DirectX, and so on. Until you do that, it's an exercise in frustration.

One thing is for sure: most games simply will not run as well on a workstation (i.e., Xeon/FirePro) as they will on a gaming rig/consumer desktop, and for many of them running Windows is a benefit. Right now, a 4790K paired with a GTX 980 is going to stomp just about everything else out there in terms of gaming, and in more than a few pro apps that aren't yet optimally multithreaded (where possible). Overclock that sucker and you've really got a beast, except for things like HD/4K rendering. Even there, it's not too shabby.
     
  16. N19h7m4r3 thread starter macrumors 65816

    N19h7m4r3

    Joined:
    Dec 15, 2012
    #16
The points are useless; what are the minimum, average, and max FPS? The overall points make no sense: the D700s have a better minimum FPS and the same max FPS, yet the GTX 780 Ti gets better points.

Since the D700s essentially double the minimum FPS and match the max, the point difference makes no sense, nor does the average FPS, since the 780 Ti dips lower.


[screenshot]

[screenshot]
     
  17. riggles macrumors 6502

    Joined:
    Dec 2, 2013
    #17
Over the years, I've come to look at Cinebench results as nothing more than a gauge of which machine would perform better in Cinema 4D. And since I don't use C4D anymore, it doesn't tell me anything I can base a decision on.
     
  18. stmp, Oct 22, 2014
    Last edited: Oct 22, 2014

    stmp macrumors member

    Joined:
    Jul 17, 2012
    #18
Hope this isn't too off topic, but is there any correlation here to the desktop 290X?

More specifically, can we assume that the 10.10 drivers that support the M295X will also support a desktop R9 290X in a cMP?

    Thanks in advance.

    Edit:

Just saw on netkas that people are working on it. It seems the R9 290X works fine, but there are still no boot screens or PCIe 2.0 as of 22 October.
     
  19. xav8tor macrumors 6502a

    Joined:
    Mar 30, 2011
    #19
Repeat after me: workstation computers and GPUs are not made for gaming...
     
  20. theSeb macrumors 604

    theSeb

    Joined:
    Aug 10, 2010
    Location:
    Poole, England
    #20
You've just posted Unigine Heaven, and we were talking about Valley :)

Also, the preset must be ExtremeHD and not a custom one.
     
  21. netkas macrumors 65816

    Joined:
    Oct 2, 2007
    #21
Wrong.

Min FPS can be hit just once during a test run due to a driver issue or something external (to the GPU), like the CPU being heavily used by another process (disk indexing, say) for a few seconds, or a scene loading; anything. It only takes one moment to break the min FPS result. Max FPS is just the shortest time taken to render some random frame (maybe a clear sky with no objects), which is pretty useless. No review ever compares max FPS.

Average FPS is the best indicator for benchmarks, and the points actually represent that.
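netkas's argument can be illustrated with a quick sketch. The frame times below are made up for illustration: a single 100 ms stall in an otherwise steady ~60 FPS run wrecks the minimum FPS while barely moving the (time-weighted) average that benchmark points track.

```python
# Why min FPS is fragile and average FPS is robust: one hiccup frame
# (driver stall, disk indexing, scene load) collapses the minimum,
# while the average over 600 frames barely changes.

def fps_stats(frame_times_ms):
    """Return (min, average, max) FPS for a list of per-frame times in ms."""
    per_frame_fps = [1000.0 / t for t in frame_times_ms]
    # Time-weighted average: total frames divided by total elapsed time.
    avg_fps = 1000.0 * len(frame_times_ms) / sum(frame_times_ms)
    return min(per_frame_fps), avg_fps, max(per_frame_fps)

steady = [16.7] * 600             # ~60 FPS for the whole run
hiccup = [16.7] * 599 + [100.0]   # same run with one 100 ms stall

min_s, avg_s, _ = fps_stats(steady)
min_h, avg_h, _ = fps_stats(hiccup)

print(round(min_s, 1), round(min_h, 1))  # ~59.9 vs 10.0: min collapses
print(round(avg_s, 1), round(avg_h, 1))  # ~59.9 vs ~59.4: average barely moves
```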
     
  22. N19h7m4r3 thread starter macrumors 65816

    N19h7m4r3

    Joined:
    Dec 15, 2012
    #22
Ah sorry, just saw that. :eek:
     
  23. benjobe2513 macrumors member

    benjobe2513

    Joined:
    Sep 10, 2008
    Location:
    Humboldt County, California
    #23
    Thank you for sharing this! It gives me hope!

I recently upgraded to Yosemite and an MSI Lightning 290X 4GB in a Cubix Xpander connected to my 2010 Mac Pro. I've been getting worse performance than with my old PowerColor 280X 3GB, about 25% less to be more accurate, in Adobe Premiere Pro and Media Encoder. I was wondering why until now. I hope the savvy sleuths over at Netkas can discover a way to unlock PCIe 2.0.
     
  24. 666sheep macrumors 68040

    666sheep

    Joined:
    Dec 7, 2009
    Location:
    Poland
    #24
Barefeats' tests are a bit unfair GPU-wise. The M295X should compete with the D700; the D300 is more like the M290X. I assume the CPU was the priority.
     
  25. koyoot macrumors 603

    koyoot

    Joined:
    Jun 5, 2012
    #25
These benchmarks are getting more and more crazy.

Especially the ones from the first post: one shows that this GPU is a beast, the next that it's hardly an upgrade over last year's.

Come on, give us some clarity!
     
