I am in the academic sector myself and unfortunately know all about the resource constraints and the particular economics that come with them. If you can argue for cost-efficiency, there is no problem obtaining grants for workstations, irrespective of brand and cost. Lunch is over, so back to writing grant applications. I can assure you that if DNA sequencing or a fluorescence microscope were as cheap as Apple computers, I would be glad! It is all a matter of perspective.

Yes, perspective. Sequencing has gotten cheap enough and fast enough that computation time is now a major bottleneck for many projects. Which is to say, in an ideal world I'd have enough on-demand computation available to process new projects ASAP so as not to slow the science. But the reality is, my local workstations can only chug through so much so fast while we wait in queues on the big clusters. Waiting on computers then becomes an inefficient use of salaries for those in the lab who might need results to decide what to do next (see how that flips relative to "tinkering"...). Data storage in places with easy and fast access is also getting more and more expensive.

Buying any individual computer is just one of several choices that cumulatively have a large effect. And if I made them with the "screw it, computers are cheap" mindset, eating the Apple tax on every one, I'd spend a nontrivial extra chunk of money on computers rather than getting those aims accomplished. So what it's sounding like is that I just need more computation than you do, so my perspective is a bit different. These aren't one-offs. Computation costs are continuous and add up on those grant balance sheets. The example of my possible machine upgrades is just one that gets played out several times a year, especially when those cluster queues get deep... we grow impatient. Unfortunately, that amount of computation is not easy to get funded (we have rules like computers must be <$5K, otherwise it's a different type of expense, etc.).
 
Back when the Power Mac and PowerBook were evidently everything a pro could want and said pros bought them (based on this forum), Apple sold far fewer Macs than it does now with a Mac Pro and MacBook Pro that no pro could want (again, based on this forum).

So is it really that HUGE a middle-ground pro market?

Yes. The middle-ground pro market has changed since the late '90s. It's way, way bigger now.
 
In biomedicine, we always underestimate the need for computers and seldom factor that in. I use the heading "lab computer" and can order what I want, assuming the grant-giving organisation agrees. However, I am not allowed to buy an office computer for more than 3k USD. Grants are always too small, so being careful with the funding is important. I doubt I would use Macs as computational machines unless there were a good economic argument. The cost of the computer is only one parameter to consider. However, people are a bit limited in their thinking regarding costs and seldom factor in everything.

I don't know if sequencing is so cheap. I still cannot do what I really want due to economic constraints, but it is getting better!
 

I think we talked about this over the past few months. Every month or so they have a freakout and give up, and then they turn around and get this stuff done.

Insane work by the folks at OTOY.

Now they need it running on macOS. I read on one of my CG forums that the Mac in that video is running Windows. I could be wrong, and I really hope it was running macOS. :)

Edit: yup! Running in Boot Camp on that Mac.
https://www.facebook.com/photo.php?fbid=10155434314819847&set=pcb.10155434325589847&type=3&theater

Great performance for the previous-gen iMac video card, though. The newer Radeon 580 in the 2017 models should see a nice boost.
 
Wait, I thought that CUDA was not welcome on the Mac platform, that it was not needed because it was so closed and proprietary, that it would be bad for macOS, blah blah, etc. Now that it could potentially run on AMD suddenly makes it good? Moving the goalposts too much?
 
Now that it could potentially run on AMD suddenly makes it good
and on Intel too.
why is that bad?

what goal posts are moved?
it would be bad for macOS
like, in actual macOS in a similar way as say, Python?
yes, would be bad.

but CUDA already supports macOS.. i don't recall anyone trying to say that was a bad thing.
?
 
It is only possible with OTOY's Octane Renderer on Windows. For now. Maybe in the future it will also be available for the Mac. It will never be available anywhere else - I mean for any other application. That is why it is a proprietary standard.
 
and on Intel too.
why is that bad?

Never said the words you quoted and replied to :p

like, in actual macOS in a similar way as say, Python?
yes, would be bad.
but CUDA already supports macOS.. i don't recall anyone trying to say that was a bad thing.
?

So today we stopped fearing that nVidia and their proprietary evil APIs will 'invade' the macOS garden? (your words)
nVidia officially returning to the Mac: bad.
CUDA available on the Mac while it's still equipped with AMD: good.
:rolleyes:


Fair enough regarding the technical part. I was just referring to the replies I got when I asked, a few posts above, why it would be so awful if Macs offered the option to choose between GPU vendors instead of ignoring one of the two. More or less, I got back suggestions that nVidia and their CUDA would leave the heavenly Mac ecosystem in ruins.
 
There is choice. Apple can use either Nvidia or AMD GPUs. But there is no choice of API you can use on the Apple ecosystem - Metal 2.
 
i think you may be skewing others' ideas or words into something other than what's being said.

saying it another way-- there's zero reason why macOS should support CUDA.. it's entirely unnecessary.

it feels as if you're getting caught up in an argument that you think is about Apple fanbois (or something) which is making you miss what's actually being said.

but just so we're clear-- do you think Windows should also let CUDA into their garden (your words)? or is CUDA support fine as is on Windows and it's only Apple who should license nVidia's software and have it included in every macOS (and i assume) iOS install?
and hey, just so we're clear about something else-
(and i've said this before around here on a few occasions)

i don't care -- AMD vs nVidia
like--> i really don't care.. i'll use either with no qualms about either.



currently, i have two macs.. both have nVidia GPUs in them.. prior to these, i had AMD.

as an end user, there is no freaking difference between using one or the other in nearly every use case. (ALL of my personal use cases)

---
you seem to keep placing me into some sort of nVidia vs Apple vs AMD battle and you're not recognizing that i'm not speaking along those lines..
again, i don't care nVidia or AMD.. they're equal
they'll both run my software equally well.
 

This is translating CUDA to OpenCL. Which is fine; it won't really impact performance. But it's not really CUDA on AMD. It's just fixing the oops of having written in a closed Nvidia language to begin with.

On the Mac, I dunno if they'll translate to OpenCL or Metal. Half-assed OpenCL support could be the reason they demoed on Windows.

If more vendors run translation tools to get out of CUDA, I don't see how that's inconsistent with previous posts.
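
To make the "translation" point concrete, here's a toy sketch (my own made-up example, nothing to do with OTOY's actual tooling): a trivial CUDA kernel and the OpenCL C it would map to. For simple compute code like this, the constructs line up almost one to one.

// Toy example only - not OTOY's translator. CUDA version of y = a*x + y:
__global__ void saxpy(int n, float a, const float* x, float* y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)
        y[i] = a * x[i] + y[i];
}

/* The same kernel in OpenCL C - basically just different qualifiers
   and a different way of getting the global index:

   __kernel void saxpy(int n, float a,
                       __global const float* x, __global float* y)
   {
       int i = get_global_id(0);  // global work-item index
       if (i < n)
           y[i] = a * x[i] + y[i];
   }
*/

Most of the real porting work is on the host side, where the <<<blocks, threads>>> launch syntax becomes buffer setup plus clEnqueueNDRangeKernel calls.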
 
This is translating CUDA to OpenCL. Which is fine, it won't really impact performance. But it's not really CUDA on AMD. It's just fixing the oooops of having written in a closed Nvidia language to begin with.

On the Mac dunno if they'll translate to OpenCL or Metal. Half ass OpenCL support could be the reason they demoed on Windows.

If more vendors run translation tools to get out of CUDA I don't see how that's inconsistent with previous posts.
It would be moronic to use CUDA or OpenCL if Metal 2 is the API on the Apple platform and has the strongest support from Apple.
 

It does impact performance if there isn't a 1:1 relationship between the performance of the two APIs' functions. If, say, one CUDA function takes n cycles to execute and the translated OpenCL function takes n+x cycles, then you'll get a performance penalty. The opposite is also true.
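
Rough numbers to show why even a small per-call gap can add up - every figure below is hypothetical, and it treats the extra cycles as if they ran back to back (ignoring the GPU's parallelism), just to get an order of magnitude for the n vs. n+x point:

// Back-of-the-envelope sketch; all numbers are made up for illustration.
#include <stdio.h>

int main(void)
{
    const double calls        = 1e9;    // hypothetical calls to the translated function
    const double extra_cycles = 4.0;    // hypothetical "x" in n vs. n+x
    const double clock_hz     = 1.5e9;  // hypothetical ~1.5 GHz GPU clock

    // added wall-clock time = calls * extra_cycles / clock_hz
    printf("added time: %.2f s\n", calls * extra_cycles / clock_hz);
    return 0;
}

With those made-up numbers the overhead is about 2.7 seconds per billion calls - noise for a one-off, real money on a long render. And as noted, the sign can flip the other way just as easily.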
 

I'm not aware of any "secret sauce" functions in CUDA that would cause that to happen, especially with Metal 2 or OpenCL on Windows. I don't think there is any custom acceleration hardware that only CUDA has access to.

You might run into problems with Metal 1 or Apple's old OpenCL where you just couldn't do things and had to go a long way around, but Metal 2 has really beefed up the ability to write custom functionality at a low level, even if there isn't a 1:1 function map.

Short version: There isn't really any reason Metal or CUDA can't generate the exact same shader byte code.
 
What if something that takes N+X cycles on the CUDA architecture takes N cycles on AMD GCN, in this particular case?
 
That was done in Boot Camp using Windows.

Just sayin'

Yup. Hoping they get it working on the Mac. The Octane people complained a while back that OpenCL on the Mac wasn't ready for what they needed it to do. I can't remember if they blamed AMD, Apple, or both.

I don't know if things have changed such that Mac users of Octane can hope for AMD/nVidia playing nice at the same time - adding an external nVidia card while also using the internal AMD GPU.
 

That's another good point. Beyond the byte code, something on AMD could actually be more optimized clock for clock.

Could be true for Nvidia too, but that would have nothing to do with CUDA.
 