More like unleashed from the ballast and crappy software that is holding the platform down ;)

Other people use more than Facebook. For professional software like Altium Designer, Siemens NX, ANSYS FEA, SolidWorks, Pro/Engineer, CATIA, etc., you'll need to boot another OS.
 
Other people use more than Facebook. For professional software like Altium Designer, Siemens NX, ANSYS FEA, SolidWorks, Pro/Engineer, CATIA, etc., you'll need to boot another OS.

Never heard of any of those. Of course, your choice of system is dictated by your needs. It’s silly to think that a single system can fulfill everyone’s purposes.
 
I don’t know, it works wonders for me in software development and data analysis. Those chips are wicked fast for stats and tabular data manipulation…
How's PyTorch working for you? Tensorflow? Keras? Caffe? Unreal Engine? Unity? CoppeliaSim? I don't mean the CPU versions, which are painfully slow.
 
How's PyTorch working for you? Tensorflow? Keras? Caffe? Unreal Engine? Unity? CoppeliaSim? I don't mean the CPU versions, which are painfully slow.

I am not working with any of these tools. My main work environment is R with occasional Stan, and they work extremely well.
 
How's PyTorch working for you? Tensorflow? Keras? Caffe? Unreal Engine? Unity? CoppeliaSim? I don't mean the CPU versions, which are painfully slow.
I'd be interested to see how this plays out in the future. I haven't moved over to AS yet, but I presume pro versions of Apple's GPUs will compare favorably. I recall seeing that Apple and the TensorFlow team (and, by extension, Keras) worked together to get the M1 supported from the start.
 
I am not working with any of these tools. My main work environment is R with occasional Stan, and they work extremely well.
Well, one can always make the argument of not using specific tools or that one bought into the wrong system. Here's the problem, though: everything I've listed worked perfectly on macOS with GPU acceleration at some point in the past. Maybe not out of the box, but with a few minor tweaks. Apple broke it all on their way to the M1. They created a TensorFlow fork which is a mess, with most things not working properly or not at all. So they didn't deliver there either.

I had high hopes for the M-series, amazing chips. But the software or lack thereof makes it a little pointless except for daily tasks like reading/writing with low power consumption. They need to get their act together and deliver the software or open up the system so others do it for them. They've moved back to the PPC days, except it's a little worse. While many things were not available on PPC OS X, we knew what we bought into. With what I listed above, we already had our toys and Apple took them away.

I have no doubt basic R works on the M1; so do Chrome and Firefox. The big question is, how does GPU acceleration work in R on the M1? I don't know the current state of R, but I'd say it doesn't work. In the past when I used it, it relied on things like rpud, which in turn requires CUDA.

All of this could change of course, but I have my doubts it will. macOS and Macs are a niche market for Apple when it comes to profit. They make their money with iPhones, iPads, the Watch and services/apps. The M-series is worth it for them because it works in the Macs and in iPads/iPhones. My guess is, in 5 to 10 years we won't have Macs anymore. We'll have a hybrid OS running on iPads or some version of it, and that makes sense for Apple, because the iPad generates so much money and people won't look out of the Apple sandbox anymore.
 
I'd be interested to see how this plays out in the future.
So am I. They released (or updated?) a Metal plug-in about a week ago. When I find the time, I'll have to dig into it. Overall I was very excited when they forked it. But it's nowhere near there yet, and at the speed they're going, it won't be anytime soon. Others are not sleeping either; Nvidia is a giant here, dominating everyone. Oddly, not because of their hardware, but their software. I've personally stepped away a little from TensorFlow and more to PyTorch, as I find it a lot easier to use these days, particularly for parallelisation.
 
Well, one can always make the argument of not using specific tools or that one bought into the wrong system. Here's the problem though, everything I've listed worked perfectly on macOS with GPU acceleration at some point in the past. Maybe not out of the box, but with a few minor tweaks. Apple broke it all on their way to the M1. They created a Tensorflow fork which is a mess with most things not working properly or not at all. So they didn't deliver there either.

TensorFlow and PyTorch worked on AMD and Intel GPUs? I was under the impression they did not offer OpenCL support, so GPUs haven't worked since 2016 (or even earlier, I don't remember when Nvidia was gone).

Apple's experimental TensorFlow fork is old history and was abandoned around half a year ago. Apple ships a TensorFlow device backend plugin now. I haven't tried it.
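For anyone curious, trying the plugin route should just be a pip install on top of Apple's macOS TensorFlow build (package names as Apple documented them around this time; a sketch, I haven't run this myself):

```shell
# Apple's TensorFlow build for macOS, plus the Metal device plugin
python -m pip install tensorflow-macos
python -m pip install tensorflow-metal
```

After that, TensorFlow is supposed to pick up the GPU as a PluggableDevice with no code changes.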

All of this could change of course, but I have my doubts it will. macOS and Macs are a niche market for Apple when it comes to profit.

Give it some time. If these new Macs end up having workstation-class performance with the portability of an ultrabook, people will want to use them for ML.
 
So am I. They released (or updated?) a Metal plug-in about a week ago. When I find the time, I'll have to dig into it. Overall I was very excited when they forked it. But it's nowhere near there yet, and at the speed they're going, it won't be anytime soon.
I'd be interested to hear results when you check it out. I'm not currently doing any heavy lifting requiring good GPU support, but new projects are always around the corner.

Others are not sleeping either; Nvidia is a giant here, dominating everyone. Oddly, not because of their hardware, but their software. I've personally stepped away a little from TensorFlow and more to PyTorch, as I find it a lot easier to use these days, particularly for parallelisation.
I never got into PyTorch -- I debated between PyTorch, TF, and Keras when I first got into ML (before keras and tf got together), and found keras to be more friendly for what I was doing at the time (primarily experimenting with NNs). My recent work uses tf (with keras frontend), but again, not doing any heavy lifting atm (mostly NLP tasks).
 
TensorFlow and PyTorch worked on AMD and Intel GPUs? I was under the impression they did not offer OpenCL support, so GPUs haven't worked since 2016 (or even earlier, I don't remember when Nvidia was gone).
Yes, you can use alternative backends for those. PlaidML is one option, others exist.
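For completeness, the PlaidML route looked roughly like this (as the project documented it; a sketch, and the project hasn't seen much maintenance lately):

```shell
# Install the PlaidML backend for Keras and pick an OpenCL device
pip install plaidml-keras
plaidml-setup
# Tell Keras to use PlaidML instead of TensorFlow
export KERAS_BACKEND=plaidml.keras.backend
```

That gave Keras models GPU acceleration on AMD and Intel hardware without CUDA.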
Apple's experimental TensorFlow fork is old history and was abandoned around half a year ago. Apple ships a TensorFlow device backend plugin now. I haven't tried it.
Same thing: basic things work, most is broken.
Give it some time. If these new Macs end up having workstation-class performance with the portability of an ultrabook, people will want to use them for ML.
Time will tell. Nvidia is the competitor here, and if Apple doesn't deliver soon, it's game over. Grace is on the way, which I assume will find its way into laptops. That plus the software tools they provide is a killer combination.
 
My recent work uses tf (with keras frontend), but again, not doing any heavy lifting atm (mostly NLP tasks).
NLP is the one thing I've always found very interesting, but never got into due to other priorities and lack of time. I have two colleagues working in that field though. My primary area of research these days is computer vision / object recognition and quantification of aleatoric and epistemic uncertainties, with applications in pure vision tasks and robotics/autonomous driving/autonomous drones. I work a lot with real-world data and sensor fusion, but also in simulation environments (Unreal, Unity, Nvidia tools like Isaac, etc.). Most of this stuff can never have enough power; for pure number crunching it gets pushed off to A100/V100 clusters.
 
NLP is the one thing I've always found very interesting, but never got into due to other priorities and lack of time. I have two colleagues working in that field though.
I've only recently begun integrating NLP tasks into my research. I typically work in programming languages, compilers, and embedded systems.
My primary area of research these days is computer vision / object recognition and quantification of aleatoric and epistemic uncertainties, with applications in pure vision tasks and robotics/autonomous driving/autonomous drones. I work a lot with real-world data and sensor fusion, but also in simulation environments (Unreal, Unity, Nvidia tools like Isaac, etc.). Most of this stuff can never have enough power; for pure number crunching it gets pushed off to A100/V100 clusters.
Your comments regarding GPU acceleration for ML tasks make much more sense to me now.
 
Someone on reddit posted this... it translates to "Hold on tight."
 

Attachments

  • Screen Shot 2021-10-14 at 5.18.53 PM.jpg (207.4 KB)
I hope “Unleashed” does refer to Intel as suggested earlier in this thread, because that would suggest AS replacements of all Intel-based offerings. I’m tempering my expectations… but that would be a really cool event.
 
I hope “Unleashed” does refer to Intel as suggested earlier in this thread, because that would suggest AS replacements of all Intel-based offerings. I’m tempering my expectations… but that would be a really cool event.

It quite possibly could be. I mean, last year they gave us the 2-year warning that they would be done with Intel. What that "done" would be is open to interpretation. This year could be the last year that they release new Macs on Intel CPUs, or it could be that this is the last year that they will have any Intel Macs in stock, so starting in November of next year, they will have no Intel products at all.

What we do know, however, is that we have longer than those 2 years for Intel Macs to be supported by macOS: natively, by Rosetta 2, or otherwise.

BL.
 
It quite possibly could be. I mean, last year they gave us the 2-year warning that they would be done with Intel. What that "done" would be is open to interpretation. This year could be the last year that they release new Macs on Intel CPUs, or it could be that this is the last year that they will have any Intel Macs in stock, so starting in November of next year, they will have no Intel products at all.

What we do know, however, is that we have longer than those 2 years for Intel Macs to be supported by macOS: natively, by Rosetta 2, or otherwise.

BL.
Regarding software support, going by Apple's track record, Intel machines will be supported for several more years.

Fingers crossed for a complete hardware transition, though. They're pretty agile with stock, so I'd be surprised if that factored in at all; there have also been recent reports of stock shortages.
 