Mac Pro or Workstation?

Discussion in 'Mac Pro' started by jc69, Jan 23, 2013.

  1. macrumors newbie

    Joined:
    Jan 23, 2013
    #1
    I'm thinking of buying a desktop to do parallel computation with CUDA on one of those NVIDIA GPU cards (Quadro, Tesla or something like this). At the same time, I want to keep using Xcode to develop for iOS.

My question is: given the potential complications, is it worth buying a Dell Precision workstation into which I can plug a Tesla/Quadro/you name it and install Mountain Lion, or would a Mac Pro be a better option? The problem with Tesla is that it doesn't seem to have Mac OS drivers, so either way I'm limited to the GPUs supported by the Mac.

I don't know... does anyone have experience using GPU cards with CUDA and using Xcode at the same time?

    thx
     
  2. macrumors 65816

    phoenixsan

    Joined:
    Oct 19, 2012
    #2
    Seems to me....

    that both alternatives have pitfalls....

a) Installing Mac OS on a PC puts you on the Hackintosh side. That side depends on customized distributions of OS X suited to certain machines. But in my experience, driver support can be lagging, out of date, or plainly not working for specific hardware, in this case CUDA-enabled graphics cards.

b) Going the Mac Pro route, as you have stated, you will be restricted to what is currently supported on these machines. Support can change between models/generations. Still, if you have the prowess, a wider pool of graphics cards can be used via flashing or some sort/level of hacking.

So, my advice is to see if the CUDA-enabled hardware of your choice is supported to some extent in the Hackintosh or Mac Pro world. As for Xcode, it is my personal preference to only use/run it on Apple hardware.

    :):apple:
     
  3. Dr. Stealth, Jan 23, 2013
    Last edited: Jan 24, 2013

    macrumors 6502

    Joined:
    Sep 14, 2004
    Location:
    SoCal
    #3
    Hello......

    A Mac Pro is a Workstation. A Unix Workstation to be precise.

I bought a Quadro 4000 as recommended by everyone (CAD & rendering) and returned it in 3 days due to pathetic CUDA performance. Only 256 CUDA cores for $800.

    I'm now running dual 680's with 3,072 CUDA cores, 8GB vram, and this thing smokes in regards to CUDA!

    Oh.... I can run Xcode all day long too.....

[image attached]
     
  4. macrumors 68030

    SDAVE

    Joined:
    Jun 16, 2007
    Location:
    Nowhere
    #4
    Dang, that's a beast.

    I remember you posted it when the power supply was outside, glad it's fitting well in there.

    Congrats!

    Can you post Geekbench scores?
     
  5. macrumors 604

    theSeb

    Joined:
    Aug 10, 2010
    Location:
    Poole, England
    #5
    It is indeed a beast. Geekbench only measures specific CPU and memory performance criteria so it won't show us anything interesting about how those GPUs are performing.
     
  6. macrumors 6502

    Joined:
    Apr 17, 2006
    Location:
    Underwater
    #6
    Yup

Yes, that's indeed badass. Pity Maxwell Render doesn't render on the GPU; I expect it'd be great for Thea or Octane, though. I'm holding out for the Phi, although I doubt the 5,1 model can support one because of the power requirements.
     
  7. macrumors 6502

    Joined:
    Sep 14, 2004
    Location:
    SoCal
    #7
    I'm running Bunkspeed Shot which utilizes all CUDA cores via Nvidia iRay. Maxwell doesn't support 3D mice so it's out of the question for me. C4D can use CUDA also.

    Sorry, didn't mean to hijack the post....
     
  8. thread starter macrumors newbie

    Joined:
    Jan 23, 2013
    #8
    Thanks for your replies, folks.

Yes, phoenixsan, I think I'll stick with a Mac for Xcode work; I don't want to be facing Xcode-on-PC problems on top of the iOS programming issues. To tell the truth, I've never used a PC with Mac OS X, and considering this is something I'll be doing in my free time (when I have it), I'll keep it simple: buy a Mac.

Dr. Stealth, you're right, a Mac is a workstation! Thanks for the tip about the Quadro.
     
  9. macrumors 601

    GermanyChris

    Joined:
    Jul 3, 2011
    Location:
    Here
    #9
Isn't the Phi running only on Linux currently?
     
  10. macrumors 6502a

    rmbrown09

    Joined:
    Jan 25, 2010
    #10
I would absolutely say go with the Dell option here. First of all, you will save a ton of money and get more power, depending on the route you take.

Looking at your pic there, I'm curious: where is the SLI bridge?
     
  11. macrumors 601

    GermanyChris

    Joined:
    Jul 3, 2011
    Location:
    Here
    #11
No SLI support in OS X.
     
  12. macrumors 603

    Joined:
    Mar 10, 2009
    #12
Yes.... although that's a bit backwards: Linux runs on the Phi itself. But as to what I think you were getting at, currently the host-to-Phi-over-PCIe drivers exist only on Linux. So the host option is just Linux right now.

Not only would Apple/Intel have to flesh out the host-side driver stack, but some parallel programming tools that would likely be needed are also missing from Intel's IDE stack. (E.g., there is no Intel Parallel Studio XE suite on OS X, nor is there Cluster MPI support: http://software.intel.com/en-us/intel-sdp-home ) [Not sure if it is supposed to be Apple's job, but there is no Intel OpenCL support for OS X either. That is rather sad, since for many Mac products that is the only option: HD 4000+.]

    ----------

Since CUDA results are being fed back to the CPU over PCIe (instead of video data going out the video port), what role is SLI playing? Over a PCIe v3.0 bus, I'm not sure what it would buy you anyway.

If you need to grid your parallel computations, even a fused memory area would cover all of the edges. The computations have to be chunked (unless they're relatively small), so just send each chunk to the appropriate card.
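    The chunking idea above can be sketched in plain CUDA. This is a minimal illustration, not anyone's production code: the `scale` kernel, the array size, and the even-divisibility assumption are all hypothetical, and error checking is omitted for brevity. Each card gets its own slice of the host array, and results come back to the CPU over PCIe with no SLI involved.

    ```cuda
    #include <cuda_runtime.h>
    #include <stdio.h>
    #include <stdlib.h>

    // Hypothetical kernel: multiply each element of this card's chunk by k.
    __global__ void scale(float *data, int n, float k) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= k;
    }

    int main(void) {
        const int N = 1 << 20;                      // total problem size
        float *host = (float *)malloc(N * sizeof(float));
        for (int i = 0; i < N; ++i) host[i] = 1.0f;

        int devices = 0;
        cudaGetDeviceCount(&devices);
        int chunk = N / devices;                    // assume N divides evenly

        // Send each chunk to the appropriate card, run the kernel there,
        // then copy that chunk's results back over PCIe.
        for (int d = 0; d < devices; ++d) {
            cudaSetDevice(d);
            float *dev;
            cudaMalloc(&dev, chunk * sizeof(float));
            cudaMemcpy(dev, host + d * chunk, chunk * sizeof(float),
                       cudaMemcpyHostToDevice);
            scale<<<(chunk + 255) / 256, 256>>>(dev, chunk, 2.0f);
            cudaMemcpy(host + d * chunk, dev, chunk * sizeof(float),
                       cudaMemcpyDeviceToHost);
            cudaFree(dev);
        }

        printf("host[0] = %f\n", host[0]);          // 2.0 if the kernels ran
        free(host);
        return 0;
    }
    ```

    For large chunks you'd overlap the copies and kernels with streams and pinned memory, but even this naive loop shows why SLI is irrelevant for compute: each card works on its own slice independently.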
     
  13. macrumors 6502

    Joined:
    Sep 14, 2004
    Location:
    SoCal
    #13
SLI


No SLI bridge, as SLI is disabled. I do no gaming. I actually work in Windows about 75% of the time, using Solidworks for CAD and Bunkspeed Shot for rendering. Shot uses a rendering engine from Nvidia called iRay, which directly utilizes all available CUDA cores. There is no benefit from SLI, and SLI can actually cause issues with iRay's performance.
     
