
Flint Ironstag

macrumors 65816
Original poster
Dec 1, 2013
1,330
743
Houston, TX USA
Learned scholars, I have a Mac Pro that I'm dedicating to experimenting with password hacking / auditing. I understand that ATI cards are best suited to this task. I'm hoping you can help me make an informed decision on a best bang for the buck purchase. Specs of the machine:

- 3,1
- 16GB
- 256GB SSD
- Radeon 5770

I've taken into consideration extra power. The 5770 will remain in the machine for boot screens, etc. I'm not afraid of flashing a PC card, but don't see any compelling need since I'm keeping the 5770.

So what do you guys think? Would probably like to spend $300 tops.

Thanks in advance!
 

WildBB

macrumors member
Mar 13, 2014
38
0
Pandora
Flint Ironstag said:

Learned scholars, I have a Mac Pro that I'm dedicating to experimenting with password hacking / auditing. I understand that ATI cards are best suited to this task. I'm hoping you can help me make an informed decision on a best bang for the buck purchase. Specs of the machine:

- 3,1
- 16GB
- 256GB SSD
- Radeon 5770

I've taken into consideration extra power. The 5770 will remain in the machine for boot screens, etc. I'm not afraid of flashing a PC card, but don't see any compelling need since I'm keeping the 5770.

So what do you guys think? Would probably like to spend $300 tops.

Thanks in advance!

To preface, my understanding of this field is minimal. I believe you need a card that can perform a high number of modular arithmetic computations per second. Nvidia has great cards that can handle modular arithmetic far better than most CPUs.

Nvidia uses the CUDA computing platform:
http://www.nvidia.com/object/cuda_home_new.html

At that price range, I would get a GTX 680 for the Mac Pro 3,1.

I do have a Mac Pro 3,1 and a GTX 680 :D
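As a rough sketch of the kind of workload being described here, a large batch of independent modular-arithmetic operations farmed out to whatever OpenCL device is available. This is only an illustration, assuming the third-party pyopencl and numpy packages (neither is mentioned in the thread); the same pattern carries over to a CUDA kernel, only the host API and kernel keywords change.

```python
# A minimal sketch, not from the thread: a batch of independent modular
# multiplications dispatched to whatever OpenCL device pyopencl finds.
# Assumes the third-party pyopencl and numpy packages are installed.
import numpy as np
import pyopencl as cl

KERNEL = """
__kernel void mod_mul(__global const ulong *a,
                      __global const ulong *b,
                      __global ulong *out,
                      const ulong m)
{
    int i = get_global_id(0);
    out[i] = (a[i] * b[i]) % m;   /* one modular multiply per work-item */
}
"""

n = 1 << 20                                   # ~1 million independent operations
a = np.random.randint(1, 1 << 30, n).astype(np.uint64)
b = np.random.randint(1, 1 << 30, n).astype(np.uint64)
m = np.uint64(2**31 - 1)                      # an arbitrary prime modulus

ctx = cl.create_some_context()                # picks a CPU or GPU device
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags

a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

prog = cl.Program(ctx, KERNEL).build()
prog.mod_mul(queue, (n,), None, a_buf, b_buf, out_buf, m)

out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)
assert np.array_equal(out, (a * b) % m)       # verify against the CPU result
```

The GPU advantage comes purely from how many of these independent operations it can run at once; the arithmetic itself is trivial.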
 

WildBB

macrumors member
Mar 13, 2014
38
0
Pandora
I have seen several papers that referenced using the CUDA platform (Nvidia) to perform these cryptanalysis computations. Good luck in your learning process. :)
 

Attachments

  • phdthesis-schwabe-printed.pdf
    958.2 KB · Views: 172
  • niederhagen-thesis-printed.pdf
    1.1 MB · Views: 158
  • kzhao.pdf
    126.1 KB · Views: 165
  • 652.pdf
    261 KB · Views: 201

goMac

Contributor
Apr 15, 2004
7,662
1,694
WildBB said:

To preface, my understanding of this field is minimal. I believe you need a card that can perform a high number of modular arithmetic computations per second. Nvidia has great cards that can handle modular arithmetic far better than most CPUs.

Nvidia uses the CUDA computing platform:
http://www.nvidia.com/object/cuda_home_new.html

At that price range, I would get a GTX 680 for the Mac Pro 3,1.

I do have a Mac Pro 3,1 and a GTX 680 :D

OP is correct. The AMD cards are better at GPGPU work across the board right now. I'd go with an AMD card. The OpenCL performance on AMD is better than anything you could get out of CUDA on Nvidia.

Plus, using OpenCL means he could use the CPU and GPU together at once. Might be helpful for design/theory and benchmarking.

FYI, the 5770 isn't the worst GPGPU performer, but it's about half the speed of the 5870, and probably 1/3 to 1/4 the speed of a modern card.
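To see the point about OpenCL covering both the CPU and the GPU, here is a short sketch (again assuming the third-party pyopencl package, which is not mentioned in the thread) that simply lists every OpenCL device the machine exposes:

```python
# Lists every OpenCL device visible on the system; on a Mac Pro this would
# typically show the Intel CPUs alongside the Radeon as separate devices.
import pyopencl as cl

for platform in cl.get_platforms():
    for dev in platform.get_devices():
        kind = cl.device_type.to_string(dev.type)      # e.g. "CPU" or "GPU"
        print(f"{platform.name}: {dev.name} ({kind}, "
              f"{dev.max_compute_units} compute units)")
```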
 

paulrbeers

macrumors 68040
Dec 17, 2009
3,963
123
Flint Ironstag said:

Learned scholars, I have a Mac Pro that I'm dedicating to experimenting with password hacking / auditing. I understand that ATI cards are best suited to this task. I'm hoping you can help me make an informed decision on a best bang for the buck purchase. Specs of the machine:

- 3,1
- 16GB
- 256GB SSD
- Radeon 5770

I've taken into consideration extra power. The 5770 will remain in the machine for boot screens, etc. I'm not afraid of flashing a PC card, but don't see any compelling need since I'm keeping the 5770.

So what do you guys think? Would probably like to spend $300 tops.

Thanks in advance!

Go with something like an AMD Radeon R9 270X. It runs the Pitcairn core, just clocked higher than the plain 270. Not as good as Hawaii, but it's fairly fast and can be had at Newegg for as low as $220. The Radeon R9 280 runs the Tahiti core, goes for right around $300, and is pretty comparable to the D500 used in the new Mac Pro....
 

goMac

Contributor
Apr 15, 2004
7,662
1,694
I meant relevant examples. That is showing bitcoin mining and two obscure OpenCL renderers that generally are only used for benchmarks.

If you have a specific example in mind, you're welcome to search for one and post it. LuxMark is a pretty well accepted indicator of OpenCL performance as it's performing actual operations, but if you have something better in mind, look it up.
 

paulrbeers

macrumors 68040
Dec 17, 2009
3,963
123
goMac said:

I meant relevant examples. That is showing bitcoin mining and two obscure OpenCL renderers that generally are only used for benchmarks.

When it comes to OpenCL, AMD blows the pants off of Nvidia in everything but the Quadro cards, and the new Maxwell cards appear to be fairly competitive in OpenCL; LuxMark is pretty much the gold standard for OpenCL benchmarking....

Now if the software was designed around CUDA, then of course OpenCL is a worthless benchmark, but it appears that many software titles are moving away from CUDA to OpenCL, simply because OpenCL is an open standard that AMD and Intel already fully support and that Nvidia appears to be adding better support for (based on Maxwell's increase in OpenCL benchmarks).

However, there is a reason cryptominers use AMD cards....
 

beaker7

Cancelled
Mar 16, 2009
920
5,010
paulrbeers said:

When it comes to OpenCL, AMD blows the pants off of Nvidia in everything but the Quadro cards, and the new Maxwell cards appear to be fairly competitive in OpenCL; LuxMark is pretty much the gold standard for OpenCL benchmarking....

Now if the software was designed around CUDA, then of course OpenCL is a worthless benchmark, but it appears that many software titles are moving away from CUDA to OpenCL, simply because OpenCL is an open standard that AMD and Intel already fully support and that Nvidia appears to be adding better support for (based on Maxwell's increase in OpenCL benchmarks).

However, there is a reason cryptominers use AMD cards....

Ok. Will stick with Nvidia. CUDA currently has the vast majority of the market for the apps I use.
 

electonic

macrumors member
Mar 18, 2014
50
8
goMac said:

I meant relevant examples. That is showing bitcoin mining and two obscure OpenCL renderers that generally are only used for benchmarks.

Yeah, LuxRender is so obscure. :D

OpenCL is the way to go for what the OP has in mind.
Go Nvidia only if you must use specific CUDA apps, and even in the creative field, software like Premiere, etc. is starting to support OpenCL now.

Especially with the task in mind - passwords - AMD is faster.
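For a rough, CPU-only sketch of the task in question - checking candidate passwords against a known hash - something like the snippet below. GPU crackers do the same thing but generate and hash billions of candidates per second; the target hash here is just md5("abc") for illustration.

```python
# Tiny brute-force demo against a known MD5 hash (md5 of "abc").
# Real auditing tools offload this inner loop to the GPU.
import hashlib
import itertools
import string

TARGET = "900150983cd24fb0d6963f7d28e17f72"   # hashlib.md5(b"abc").hexdigest()

def crack(target_hex, charset=string.ascii_lowercase, max_len=4):
    for length in range(1, max_len + 1):
        for combo in itertools.product(charset, repeat=length):
            candidate = "".join(combo)
            if hashlib.md5(candidate.encode()).hexdigest() == target_hex:
                return candidate
    return None

print(crack(TARGET))   # -> "abc"
```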
 

electonic

macrumors member
Mar 18, 2014
50
8
It is most important to look at the software used and decide according to that.

AMD is in general the better computing platform right now, but there is of course specialized software written for CUDA.

It just depends, again, on the software used.
Base your decision on that.
 

goMac

Contributor
Apr 15, 2004
7,662
1,694
dollystereo said:

Here at the crypto research group, they use Nvidia. (CUDA)

There really isn't much difference between CUDA and OpenCL, besides one being locked to Nvidia and one being open.

What I typically find is that code/projects that were written before OpenCL was available might still be using CUDA. Nvidia gave away a lot of hardware and sponsored a lot of research programs, which helps them with that initial lock-in.

But all the new stuff I'm seeing is all OpenCL. I haven't seen much in the way of new CUDA based projects.

Especially since OpenCL on AMD hardware can give you much better performance than CUDA on Nvidia hardware.
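To illustrate how small the gap is at the kernel level, here is the same trivial kernel sketched in both dialects (illustrative only, not taken from any project mentioned in the thread); the differences are mostly spelling.

```python
# The same element-wise kernel, once in OpenCL C and once in CUDA C.
# Kept as plain strings here purely for side-by-side comparison.

OPENCL_KERNEL = """
__kernel void scale(__global float *data, const float factor, const int n)
{
    int i = get_global_id(0);                      /* which element to handle */
    if (i < n)
        data[i] *= factor;
}
"""

CUDA_KERNEL = """
__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x; /* same idea, CUDA spelling */
    if (i < n)
        data[i] *= factor;
}
"""
```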
 

Flint Ironstag

macrumors 65816
Original poster
Dec 1, 2013
1,330
743
Houston, TX USA
Thanks for the input.

I think I will stick with AMD for the following reasons:

1. Promote an open platform with my $ (no more Blu-ray vs. HD DVD wars)
2. Most of the open source apps I plan to try perform better on AMD


dollystereo said:

Here at the crypto research group, they use Nvidia. (CUDA)

I am curious though - dollystereo, can you elaborate? Apps, what type of problems are being tackled, performance, etc.
 

FluJunkie

macrumors 6502a
Jul 17, 2007
618
1
goMac said:

There really isn't much difference between CUDA and OpenCL, besides one being locked to Nvidia and one being open.

What I typically find is that code/projects that were written before OpenCL was available might still be using CUDA. Nvidia gave away a lot of hardware and sponsored a lot of research programs, which helps them with that initial lock-in.

But all the new stuff I'm seeing is all OpenCL. I haven't seen much in the way of new CUDA based projects.

Especially since OpenCL on AMD hardware can give you much better performance than CUDA on Nvidia hardware.

Beyond what you're suggesting, CUDA was established earlier, which means software was written, and clusters bought and configured with CUDA in mind.

OpenCL might be the cat's pajamas, but there's a huge existing infrastructure you have to contend with.
 

goMac

Contributor
Apr 15, 2004
7,662
1,694
FluJunkie said:

Beyond what you're suggesting, CUDA was established earlier, which means software was written, and clusters bought and configured with CUDA in mind.

OpenCL might be the cat's pajamas, but there's a huge existing infrastructure you have to contend with.

Not really. OpenCL runs on the same hardware as CUDA, and there isn't much of a theoretical difference in performance. I've been involved with projects under one of those Nvidia sponsorships before, so I know how it goes. That's not a knock against Nvidia, just the reality of why things are the way they are.
 

beaker7

Cancelled
Mar 16, 2009
920
5,010
All these supercomputer clusters running Teslas and CUDA applications should read MacRumors to get educated.
 

goMac

Contributor
Apr 15, 2004
7,662
1,694
beaker7 said:

All these supercomputer clusters running Teslas and CUDA applications should read MacRumors to get educated.

Actually, from what I've heard, AMD Fusion is the next big thing for supercomputer clusters.

I'd strongly question how "educated" and connected someone is who thinks supercomputer clusters, especially these days, are using Nvidia cards.

That's not to say Nvidia didn't have a performance advantage for a long time. I did a lot of CUDA work on Tesla cards. But that advantage is pretty much gone.

If you're building a cluster today, you have a lot of good contenders. AMD Fusion, Xeon Phi, discrete AMD cards...

Sandia National Labs (who apparently, according to you, have no idea what they're doing) chose AMD Fusion for one of their newest clusters over Nvidia, probably because Nvidia is not really the performance champ right now:
http://archive.hpcwire.com/hpcwire/...r_with_amd_fusion_chips_debuts_at_sandia.html
 

FluJunkie

macrumors 6502a
Jul 17, 2007
618
1
goMac said:

Not really. OpenCL runs on the same hardware as CUDA, and there isn't much of a theoretical difference in performance. I've been involved with projects under one of those Nvidia sponsorships before, so I know how it goes. That's not a knock against Nvidia, just the reality of why things are the way they are.

By "infrastructure" I also meant software. While you could knock out OpenCL code on Nvidia-based clusters and be fine, running up against legacy software is a bigger challenge, because it means man-hours recreating work that's already been done.

In academia at least, that's a problem.
 