Wondering what the hive thinks of the chances of Apple using Nvidia GPUs over ATI/AMD GPUs. I recall Apple being a huge fan of OpenGL, but some of my work is leading me down the path of Nvidia's CUDA.
TIA
Personally, I believe there is not much chance. Apple and Nvidia don't seem to have a good business relationship as of late, and Apple has invested heavily in AMD GPUs across its hardware. Besides, Nvidia doesn't really have anything better to offer in the bracket Apple is interested in: at the lower TDPs, AMD's Polaris and Nvidia's Pascal perform very similarly, and we will probably see a decent boost from a low-TDP mobile Vega next year...
Ugh, you are validating what my instincts have been dancing around. I hope this isn't the final nail in the coffin for me as an Apple customer.
If that's sufficient for you, an eGPU is always a possibility. But if your work relies heavily on CUDA, then Apple doesn't offer any tools in that department.
Personally, I think that CUDA is a bad thing for the entire industry and I'd be happy to see it die. Then again, there is no question that Nvidia's marketing department was very smart in seizing the opportunity, and there currently is no viable alternative, especially since Nvidia actively sabotages any other development that could make CUDA lose ground. On Apple's platform you can and should use Metal, which is just as user friendly as CUDA (if not more so), but it still lacks features.
Same situation here, and tbh I don't like this one bit. Not just as an Apple user, but also because I think a situation where an entire engineering/scientific field depends on the products of a single vendor is inherently wrong. I also looked into that AMD framework you mention; hopefully it will be a serious attempt at giving Nvidia some competition (at the moment it looks very limited in terms of platform compatibility, though). But for the time being, there is unfortunately no CUDA alternative that really works.
Sadly no, because Apple is Apple and they'll never admit they are wrong to cram in graphics cards with half the power of the current Surface Book 2 and charge you twice as much for it.
I wish Apple would put a GTX 1060 in their notebooks, but I do not see that happening. I do machine learning, and right now CUDA is the only game in town for GPU acceleration. But AMD does have a new framework that is supposed to be API-compatible with CUDA and open source, so maybe someday.
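To make the "only game in town" point concrete, here is a minimal sketch of a CUDA kernel driven from Python with Numba. It assumes an Nvidia GPU, the CUDA toolkit, and the numba package, and the saxpy example itself is purely illustrative; the AMD framework mentioned above (presumably the ROCm/HIP stack) aims to let essentially this kind of kernel code run on AMD cards as well.

```python
# Minimal sketch of the kind of GPU kernel that ties ML/scientific code to
# CUDA today. Assumes an Nvidia GPU plus the CUDA toolkit and numba package.
import numpy as np
from numba import cuda

@cuda.jit
def saxpy(a, x, y, out):
    """out[i] = a * x[i] + y[i], one element per GPU thread."""
    i = cuda.grid(1)                  # global thread index
    if i < out.shape[0]:              # guard threads past the end of the array
        out[i] = a * x[i] + y[i]

n = 1_000_000
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)

d_x = cuda.to_device(x)               # copy inputs to GPU memory
d_y = cuda.to_device(y)
d_out = cuda.device_array_like(x)     # allocate the output on the GPU

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
saxpy[blocks, threads_per_block](np.float32(2.0), d_x, d_y, d_out)

result = d_out.copy_to_host()         # bring the result back to the CPU
```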
Did you have a look at Apple's ML framework? https://developer.apple.com/machine-learning/ I don't do ML myself, so I have no idea how good it is, but it's something they use quite heavily across the OS.
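For context, Apple's pitch there centers on Core ML, which is about running already-trained models on-device rather than training them. A rough sketch of that workflow, assuming the coremltools and scikit-learn packages (the toy model, feature names, and file name are made up for illustration):

```python
# Rough sketch of the Core ML workflow: train a model anywhere, convert it
# with coremltools, then ship the .mlmodel for on-device inference from
# Swift/Objective-C. Converter names may vary between coremltools versions,
# and the toy model, feature names, and file name here are illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression
import coremltools

# Train a toy regressor on the CPU (no GPU involved at all).
X = np.random.rand(100, 3)
y = X @ np.array([1.5, -2.0, 0.5]) + 0.1
sk_model = LinearRegression().fit(X, y)

# Convert to Core ML's format and save it for use in an app.
mlmodel = coremltools.converters.sklearn.convert(sk_model, ["x1", "x2", "x3"], "y")
mlmodel.save("ToyRegressor.mlmodel")
```

The training step still happens wherever you like, which is why the CUDA question above doesn't really go away.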
For you folks who are into ML, check out these blogs (if you haven't already):
https://machinelearning.apple.com/
Take note that they also used TensorFlow in some of their applications... does this mean they used an Nvidia card at one time?!?
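Not necessarily: using TensorFlow doesn't by itself imply Nvidia hardware, and as far as I recall the stock macOS builds of that era were CPU-only. A quick, hedged way to check what a given TensorFlow install actually sees (assuming the tensorflow package; these are 1.x-era API names and may differ in newer releases):

```python
# Quick check of whether a TensorFlow install is CUDA-backed and which GPUs
# it can see. Assumes the tensorflow package; these are 1.x-era API names
# (the era of this thread) and may differ in newer releases.
import tensorflow as tf
from tensorflow.python.client import device_lib

print("Built with CUDA:", tf.test.is_built_with_cuda())

for device in device_lib.list_local_devices():
    # CUDA-visible GPUs are listed with device_type == "GPU"; a CPU-only
    # build (e.g. stock TensorFlow on a Mac without an Nvidia card) lists
    # only "CPU" devices.
    print(device.device_type, device.name)
```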
I agree 100% with this. Having a Unix-based system with a polished GUI that "just worked" is what made the Mac a fantastic machine for a scientist, and despite some problems I find this to be still the case today.
These responses are very disheartening. I remember when I was very active in academia that Apple was pushing their products at the academic/research fields, in that they were offering a user-friendly Linux kernel that was 1000x more useful for research applications than Windows would ever be.
In today's research field, a dedicated state-of-the-art GPU is almost a necessity. I'm almost afraid that Nvidia's and AMD's frameworks are going to turn this into the Beta versus VHS wars of years ago, where no one really wins.
I took a look at Metal 2, and it looks very promising, but it may already be behind the power curve given Nvidia's head start.
As I understand it, Mac OS X is BSD-based, not Linux.
Mac OS X is more directly based on NeXTSTEP, which is BSD-based. NeXTSTEP, the progenitor of Mac OS X, was developed by NeXT, the company Steve Jobs founded in 1985; the first NeXTSTEP release came in 1989. NeXT was later bought by Apple, which brought Steve Jobs back, and NeXTSTEP began the transition to Mac OS X under the code name Rhapsody.
Linux began development in 1991. The Linux kernel was created by Linus Torvalds.
OS X is more closely related to FreeBSD.
A very good read and pretty much spot-on, thanks for sharing. As a machine learning and scientific computing guy, I have long been hoping to see some genuine competition in the field to combat Nvidia's price gouging, and this could be the right moment if other vendors play their cards well (pun intended).
This seems very timely:
http://timdettmers.com/2017/12/21/deep-learning-hardware-limbo/
Intel's new Nervana NNP chip sounds pretty amazing. What an awesome yet terrible time to be involved with all this...