I know it's a long shot, but my dream is an Apple Arcade+ with AAA games (unleash the gaming potential). I have high hopes for the performance of the new AS chip, but I'm not a video editor, so the chip will be totally overkill for my needs as I cannot game on it.
 
Apple acquired PA Semi 13 years ago, and PA Semi's co-founder 'was the lead designer of the DEC Alpha series of microprocessors' and of StrongARM, a series of power-efficient ARM microprocessors (according to Wired). The 150 chip designers at PA Semi have more or less spent these years designing chips for iOS devices and Macs, but the Mac side of this probably took longer than they expected.

'Unleashed' could simply be a reference to the fact that they (since the release of the M1) finally have let their biggest cat (so far) out of the bag.

Or maybe they just use 'unleash' to bump Apple Silicon threads in forums like this. :)

 
A few guesses:

• It’s a jab at Intel. The Mac is finally going to be able to show its power “unleashed” and not constrained by Intel.

• It’s a reference to a higher M-series chip. Suggesting M1 is the “tame” version, and they’re going to unleash the beast with an M1X.

• Battery life is so good, you are not “leashed” to a power cable.

• Or it’s just fun marketing hinting that Apple is finally going to unleash their full potential.

Good marketing copy often has multiple meanings. So my guess is they had a few things in mind.
Spot on, agree. I think that the main message will be:

‘These are the professional MacBooks that we always wanted to make - but couldn’t until we made our own chips.

So for the first time, their true potential is unleashed

(And btw Intel suck - look at these performance and battery life comparison graphs)’
 
Yes, you can use alternative backends for those. PlaidML is one option, others exist.

I thought PlaidML has a Metal backend that works on M1 devices (e.g. https://github.com/plaidml/plaidml/issues/1635)?

Anyway, all of this is about having passionate community members that get involved in the FOSS project. It often takes just one or two people who are knowledgeable and invested in the community to get things going. For example, Haskell (GHC) now has full support for Apple Silicon (and it was not easy at all!), simply because a few people like using Macs and so they did it.

I am quite sure that once these new Macs are released, and if they have excellent performance, some people will start doing experiments and others will want to use them too. As interest rises, it is only a question of time until someone supplies a series of patches that make PyTorch and other tools play with Apple Silicon... hell, I would do it myself if I were invested in these frameworks.
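To make that concrete, from the user's side such patches would mostly boil down to exposing a new device type. Here is a sketch of a device-selection helper; `torch.backends.mps` is the API PyTorch eventually shipped for Apple Silicon, but the helper itself is hypothetical and written against a torch-like module object so the logic is easy to follow:

```python
# Hypothetical helper: pick the best available device on a torch-like
# module. `backends.mps` mirrors the Apple Silicon API PyTorch later
# shipped; older builds simply fall through to CUDA or CPU.
def pick_device(torch):
    mps = getattr(getattr(torch, "backends", None), "mps", None)
    if mps is not None and mps.is_available():
        return "mps"
    if torch.cuda.is_available():
        return "cuda"
    return "cpu"
```

With a real PyTorch install you would call `pick_device(torch)` once and pass the result to `tensor.to(...)`, so the rest of the script stays device-agnostic.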

BTW, another example: MoltenVK. One often hears how porting games to Mac is not profitable and how many problems there are, and yet there is an entire business entity that oversees a large FOSS project dedicated to building a Vulkan implementation for Apple.

Time will tell. Nvidia is the competitor here, and if Apple doesn't deliver soon, it's game over. Grace is on the way, which I assume will find its way into laptops. That plus the software tools they provide is a killer combination.

Grace is a super computer product, it will never be shipped in consumer devices. Or do you think Nvidia will be interested in building an ARM Linux laptop with a cut down Grace for data scientists? That sounds veeeery niche...
 
Apple removed Nvidia support long ago, so CUDA has been a no-go on macOS for a long time.
Doesn't matter though. Professionals will buy hardware/tools to solve problems and get things done. And if that means they'll have to switch from macOS to Linux or Windows, that's what's going to happen, as long as those platforms provide the necessary tools. Nvidia has a solution for pretty much everything: graphics, medical applications, robotics, physics, you name it, they have it. And it makes the job much, much easier. Everyone else has... a little tech demo in comparison.

Apple fans are loyal, more than others. However, we're already seeing many people leaving Apple behind right now. I personally hate Windows with a passion, and Linux is a must-have in my field of research. Ever since the G4 came out, Apple has delivered my daily drivers, and before that I used Apple products dating back to around the mid 80s, I can't really remember. At this point I don't care anymore. I'm using my Macs for reading/writing, cutting a video here and there, and Capture One for my photography work (hobby). If you had asked me in early 2020 whether that would ever happen, I would have told you hell no, not in a million years, yet here we are.
Wait, you've tried that? That requires Monterey to work.
You need macOS 12.0+ for the tensorflow-metal PluggableDevice. But tensorflow-metal still requires tensorflow-macos, which does not require 12.0. Basic stuff in this is still broken. One of my students used it... for a short amount of time... then went back to Ubuntu + an Nvidia GPU.
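For anyone wiring this up, the macOS 12 requirement is easy to gate on before touching TensorFlow at all. A minimal sketch; the helper name and version check are mine, only the 12.0+ requirement comes from the post above:

```python
import platform

def supports_tf_metal_pluggable(mac_version: str) -> bool:
    """tensorflow-metal's PluggableDevice path needs macOS 12.0+ (Monterey)."""
    if not mac_version:
        return False  # platform.mac_ver() returns "" on non-Mac systems
    return int(mac_version.split(".")[0]) >= 12

print(supports_tf_metal_pluggable(platform.mac_ver()[0]))
```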
I hope “Unleashed” does refer to Intel as suggested earlier in this thread, because that would suggest AS replacements for all Intel-based offerings. I’m tempering my expectations… but that would be a really cool event.
That would mean we'll see a new Mac Pro. I doubt that's going to happen, unless the Mac mini Pro is the new Mac Pro. And if that is the case, it'll be a new "trashcan": not upgradeable, no PCIe, etc. People won't be happy. I think the new Mac Pro will be introduced at WWDC next year with a fall release.
As interest rises, it is only a question of time until someone supplies a series of patches that make PyTorch and other tools play with Apple Silicon... hell, I would do it myself if I were invested in these frameworks.
Maintaining this stuff on your own is of course an option, but with the amount of tools required, plus porting results from other research groups, that means hiring people for maintenance work. Too much work involved. It's easier to switch over to something else, even if it would be Windows.

I've been primarily on Linux for a little while now, and surprisingly I'm not missing much. Sure, I have no DEVONthink for syncing between iPads and macOS, no native Capture One, no Final Cut, no Bookends, and I switched from Evernote to Joplin. I'm missing Texpad a little, as well as the Highlights app and LiquidText. While Capture One / Final Cut remain on macOS, surprisingly I can get things done on Linux as well, everything. Sure, the email client and calendar don't look as nice as on macOS, and neither does working with LaTeX, but it works, no problem there. Again, we'll see what happens in the future. Until then, people in these fields will switch; they might come back, or not. Time will tell.

The best thing that could happen to us would be full Linux support from Apple, either in a VM or, even better, native with dual-boot. And by full, I mean eGPU support. How awesome would it be to have a powerful M-series chip, run Linux in a VM with an Nvidia GPU passed through to the VM, hook up 3 or 4 monitors, two with Linux, two with macOS at the same time, and boom, problem solved.
Grace is a super computer product, it will never be shipped in consumer devices. Or do you think Nvidia will be interested in building an ARM Linux laptop with a cut down Grace for data scientists? That sounds veeeery niche...
Scaled down of course. This thing will go into the next generation Jetson boards, so they might as well bring it to a laptop. Or just sell them to anyone to make their own products, which of course will run ARM Windows. If you break it down, Grace is a custom ARM CPU. So essentially Nvidia's version of the M-series chips. The difference is, they're willing to sell to Dell, Lenovo, HP and others for custom products. How good it will be, remains to be seen.
 
The best thing that could happen to us would be full Linux support from Apple, either in a VM or, even better, native with dual-boot. And by full, I mean eGPU support. How awesome would it be to have a powerful M-series chip, run Linux in a VM with an Nvidia GPU passed through to the VM, hook up 3 or 4 monitors, two with Linux, two with macOS at the same time, and boom, problem solved.

Wait, that's actually a great idea! Has anyone tried to use an eGPU with Linux in Parallels? It's a Thunderbolt device, and as far as I know macOS virtualization allows PCIe passthrough to a guest VM. I don't see any reason why you shouldn't be able to run CUDA with an eGPU in a Linux VM.
 
Wait, that's actually a great idea! Has anyone tried to use an eGPU with Linux in Parallels? It's a Thunderbolt device, and as far as I know macOS virtualization allows PCIe passthrough to a guest VM. I don't see any reason why you shouldn't be able to run CUDA with an eGPU in a Linux VM.
Doesn't work in Parallels (or VMware) from what I know, at least not the last time I tried. It's been a requested feature on the Parallels support forum, but it has always been "ignored" by the devs.

I don't think it is that easy, and it requires the right hardware. From my limited experience with Proxmox, you can only pass through IOMMU groups, not individual devices. That is problematic because a device (and therefore the whole group it's in) is either used by the host OS or passed through to the guest OS; it can't be shared. So when a device is in a group with another device, both would either be passed through or not. In turn this means that the port the eGPU is connected to would have to be the only device in its IOMMU group, or Apple could split the left-/right-side ports so that one side would be for macOS and the other for the VM. Some hardware allowed ACS override (which would also have to be in the kernel of the host OS); then it was possible to split a device out of its existing IOMMU group and put it into its own to make it work. If Apple's hardware design allows for this (I don't know) and macOS supports it as a host OS (I don't know), then it would be possible. It certainly is within their ability to provide these features, but whether they actually will remains to be seen. I'd be first in line for a new fully loaded MBP and Mac mini Pro / Mac Pro if they make this happen.
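For anyone who wants to check this on a Linux host, the group layout is visible under sysfs. A small sketch; the `/sys/kernel/iommu_groups` path is the standard Linux one, the helper name is mine:

```python
from pathlib import Path

def iommu_groups(base="/sys/kernel/iommu_groups"):
    """Map IOMMU group number -> PCI addresses in it (Linux sysfs layout)."""
    groups = {}
    root = Path(base)
    if not root.is_dir():  # IOMMU disabled, or not a Linux host
        return groups
    for dev in root.glob("*/devices/*"):
        group = int(dev.parent.parent.name)
        groups.setdefault(group, []).append(dev.name)
    return groups

# A device can only be passed through on its own if its group
# contains exactly one entry.
for num, devs in sorted(iommu_groups().items()):
    print(f"group {num}: {', '.join(sorted(devs))}")
```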
 
Doesn't work in Parallels (or VMware) from what I know, at least not the last time I tried. It's been a requested feature on the Parallels support forum, but it has always been "ignored" by the devs.


This is annoying; I would have expected this to work 100%. As I said before, I think there are good reasons (for Apple and for the health of the ecosystem) not to support third-party GPUs under macOS, but these limitations should not apply to a Linux VM. In fact, it would be beneficial to Apple if their hardware were flexible enough to support these use cases.

I don't think it is that easy, and it requires the right hardware. From my limited experience with Proxmox, you can only pass through IOMMU groups, not individual devices. That is problematic because a device (and therefore the whole group it's in) is either used by the host OS or passed through to the guest OS; it can't be shared. So when a device is in a group with another device, both would either be passed through or not. In turn this means that the port the eGPU is connected to would have to be the only device in its IOMMU group, or Apple could split the left-/right-side ports so that one side would be for macOS and the other for the VM.

The bits and pieces hackers have figured out about Apple's IOMMU implementation suggest that it's fairly advanced and offers per-device barriers. Also, on Apple Silicon each port is driven by a separate channel; there are no shared ports. However, since my practical knowledge of virtualization technology is basically zero, I am just speculating here. I would be curious to know whether ARM has corresponding virtualization technology and whether Apple supports it in their virtualization frameworks.
 
I don't know how you guys are missing this, but it's simple: solar panels. The entire top will be solar panels, so you will be unleashed, meaning never tied down to charging from an outlet again. /s
 
Face ID.... Dot Projector?
 

Regarding software support, by Apple’s track record, Intel machines will be supported for several years.

Native Intel support, yes, I can see that lasting a while. However, I can see them dropping Rosetta 2 long before dropping native Intel support. That is a big difference, especially if some software isn't going to be updated because the developers of that software are moving to a completely different format as well as pricing model for that software. If an alternative can't be found before Rosetta 2 goes away, some people may be stuck.

Fingers crossed for a complete hardware transition, though. They’re pretty agile with stock, so I’d be surprised if that factored in at all, though there have also been recent reports of stock shortages.

That's what I'm thinking. I wouldn't be surprised if only AS models of this new hardware are coming out; meaning, no Intel Macs at all at this event.

BL.
 
Native Intel support, yes, I can see that lasting a while. However, I can see them dropping Rosetta 2 long before dropping native Intel support. That is a big difference, especially if some software isn't going to be updated because the developers of that software are moving to a completely different format as well as pricing model for that software. If an alternative can't be found before Rosetta 2 goes away, some people may be stuck.
Rosetta 2 might stick around a lot longer than Rosetta did. Rosetta used licensed IP from QuickTransit (and later IBM) to work, whereas it seems Rosetta 2 was made in-house. I'm betting Apple would have kept Rosetta around longer had it not cost them licensing fees to do so.
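As a practical aside for anyone worried about being stuck when Rosetta 2 goes away: macOS exposes a `sysctl.proc_translated` sysctl that tells you whether the current process is running translated. A hedged sketch (the helper is mine; it returns False anywhere this sysctl doesn't exist):

```python
import subprocess
import sys

def running_under_rosetta() -> bool:
    """True only when this process is x86_64 code translated by Rosetta 2.

    On Apple Silicon, `sysctl -n sysctl.proc_translated` prints 1 for
    translated processes and 0 for native ones; on other platforms (or
    if sysctl is missing) we just report False.
    """
    if sys.platform != "darwin":
        return False
    try:
        out = subprocess.run(
            ["sysctl", "-n", "sysctl.proc_translated"],
            capture_output=True, text=True, check=False,
        )
    except OSError:
        return False
    return out.stdout.strip() == "1"
```

A tool can use this to warn its users that they are on the translation path before it disappears.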
 
I say M1X SoC for 14" & 16" MacBook Pro laptops & for a Mac mini that replaces the last standing Intel Mac mini...

M1X SoC
10-core CPU (8P/2E)
Improved Neural Engine
Space Gray

32GB RAM
16-core GPU
512GB SSD
$1999

64GB RAM
32-core GPU
1TB SSD
$2999
 
I'd be interested to see how this plays out in the future. I haven't moved over to AS, yet, but I presume pro versions of Apple's GPUs will compare favorably. I recall seeing that Apple and tf (and, by extension, keras) worked together to get M1 supported from the start.

You can't rely on OpenCL on M1, since it's been broken since Big Sur 11.5, so apps like hashcat that rely on OpenCL have been broken for months. For reliability, CUDA and OpenCL on Nvidia, and then AMD, are better and much more performant.
 
'Unleashed', IMO, connotes that they're pretty self-assured (cocky?) that they'll be surpassing expectations on Monday.

No matter how these perform, I'm hopeful about the future benefits to consumers that this competition brings. No more tick tock +++++.
 
You can't rely on OpenCL on M1, since it's been broken since Big Sur 11.5, so apps like hashcat that rely on OpenCL have been broken for months. For reliability, CUDA and OpenCL on Nvidia, and then AMD, are better and much more performant.

That’s a strong claim. Got anything to back it up? The few tools that use OpenCL worked fine on my M1. I quickly checked the hashcat issue list, and the only mention is that a certain version of hashcat introduced a change that made it inoperable on M1 machines. There was no reply or analysis.
 
Apple itself described the first-generation M1 as not even trying. Craig said the A12Z is powerful, so imagine what Apple Silicon is gonna be like (paraphrasing here).

What’s coming from Apple on Monday is more like an onslaught. You are gonna get the full enchilada here.

Unleashed in performance, power, speed, battery life, design, not holding back. Basically pushing the limits and then some.
 