For anyone still confused about the T2 chip and the new Mac chips: this video makes it very clear that new Macs will not have a T2 chip, as everything will be on the new SoC.

Yep! The T2 simply reproduces functionality that already exists inside the A-series SoC on iOS devices, albeit in a separate block (for security reasons).
 
This (and many other) WWDC videos have been linked in numerous threads multiple times already. We should probably make more of an effort as a community to make this information easier to discover. Maybe a wiki post?
 
So, it's official: no dGPUs.

For now. There is nothing in that video that says that will be the case forever.

The DTK had no possibility of a third-party GPU (nothing inside, and no Thunderbolt, so no external one possible either).
The first Macs that transition are very likely in the same boat internally. Half the current Mac lineup has no dGPU. Developers really need to optimize their code for Apple's GPUs to perform well. By taking away distractions that don't matter, folks should get more work done, sooner rather than later.

So if the vast majority of the first systems don't have one, why would Apple spend tons of time on it at WWDC 2020? There will be WWDC presentations in 2021 and 2022 (and after).

P.S. Third-party GPU drivers are probably going to take much longer to do.
 
I've wondered if Apple couldn't make use of multiple Apple Silicon chips in a higher-performance desktop or workstation? After all, you have multiple ARM chips in supercomputers.

Highly unlikely. Apple's comments are highly focused on performance/power. More likely they would create different CPU package(s) that contain "more stuff" than have to support inter-package connections.

macOS can't deal with more than 64 cores. Apple is not chasing ultra-high core counts. That isn't the point in the smartphones, and it isn't going to be the point in Macs either. IMHO I'd expect the revised Mac Pro to top out at no more than 32 cores. Maybe multiple dies in chiplet style, but that is substantively different.


The Mac Pro dumped multiple CPU packages seven years ago. Apple isn't going back. The norms of the primary parts of the Mac lineup will drive the base SoC package design rules.
 
So, it's official: no dGPUs.

There is a rumor of a dGPU code-named "Lifuka". Whether this is a separate package or an optional GPU chipset that is part of the Apple Silicon SoC remains to be seen. I would expect any future ASi Mac Pro to have separate PCIe boards with some kind of dGPU or accelerator like the Afterburner.

I wasn't aware of the 64-core limit for macOS, but that should probably be sufficient provided the cores are at least 60-70% of the power of a hyper-threaded Intel Xeon core.
 
There is a rumor of a dGPU code-named "Lifuka". Whether this is a separate package or an optional GPU chipset that is part of the Apple Silicon SoC remains to be seen. I would expect any future ASi Mac Pro to have separate PCIe boards with some kind of dGPU or accelerator like the Afterburner.

It's most likely an IP block that can be packaged in multiple silicon formats. For higher-end applications (such as the Mac Pro) I expect them to use a NUMA hierarchy with multiple CPU+GPU boards and a fast interconnect. Really curious to see how they solve it anyway.

I wasn't aware of the 64-core limit for macOS, but that should probably be sufficient provided the cores are at least 60-70% of the power of a hyper-threaded Intel Xeon core.

Given how much more power-efficient Apple cores are at lower frequencies, outperforming the Xeons is only a question of core packaging and interconnect (neither of which is trivial, of course). But assuming Apple can solve the practical issues (and they have enough talent and money to do that), outperforming Xeons is not going to be a big problem for them. A 2.5GHz Apple core already offers performance similar to a 4.0GHz Xeon core while consuming about as much power as a Xeon core running at around 2.0GHz. Basically, a 16+8-core Apple CPU with a TDP around 100 watts would match (give or take) the sustained multi-core performance of a current 28-core Xeon. An Apple workstation CPU with 32 cores, assuming Apple can make one, would be far beyond Intel's ability.
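The arithmetic behind that claim can be sketched in a few lines of Python. To be clear, every number here is an illustrative assumption from the reasoning above (per-core parity at 2.5 vs 4.0 GHz, an efficiency core contributing roughly 30% of a big core, a 28-core Xeon sustaining around 2.5 GHz all-core), not a measured or published figure:

```python
# Back-of-envelope sketch; all figures are assumptions for illustration.

apple_big, apple_small = 16, 8
small_core_factor = 0.3        # assume an efficiency core ~30% of a big core
apple_core_perf = 1.0          # assume a 2.5 GHz Apple core ~ a 4.0 GHz Xeon core

xeon_cores = 28
xeon_sustained_ghz = 2.5       # assumed all-core sustained clock under load
xeon_core_perf = xeon_sustained_ghz / 4.0   # relative to its 4.0 GHz peak

apple_total = apple_big * apple_core_perf + apple_small * apple_core_perf * small_core_factor
xeon_total = xeon_cores * xeon_core_perf

print(f"16+8 Apple CPU (normalized): {apple_total:.1f}")
print(f"28-core Xeon (normalized):   {xeon_total:.1f}")
```

Under these assumptions the two come out within a few percent of each other, which is all "match, give or take" claims: change the assumed sustained clock or efficiency-core factor and the gap moves accordingly.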
 
For now. There is nothing in that video that says that will be the case forever.



P.S. Third-party GPU drivers are probably going to take much longer to do.

Agree. One of the interesting bits of the talk was the work of isolating devices from each other for more secure DMA at the hardware level.

Not much point in reminding folks to fix their drivers unless PCIe devices with third-party drivers are still going to be a thing. iOS devices also use this architecture, sure, but there it's all on Apple's side of things and not relevant to developers.

As for your PS comment though, I'm not really sure that's the case here. On one hand, they are using an iBoot-derived boot process, so EFI/BIOS GPU firmware isn't likely to be all that useful during boot. On the other, the new startup experience looks like it happens further along in the boot process than before, with much richer functionality. So there's a possibility (which I admit is speculation on my part) that they are going straight for the OS driver rather than relying on EFI drivers. If they've pulled that off, it actually helps solve the boot-screen issues with eGPUs, which is a very intriguing idea.

Either way, once the OS has booted and loaded the PCIe driver for a GPU, there shouldn't be huge differences between the OS driver for Intel vs ARM, much like there weren't huge shifts between PPC and Intel outside of the pre-boot firmware the cards needed for boot screens. The individual drivers are already isolated from the differences in how a PCIe bus is exposed to the kernel, and the graphics stack isn't changing drastically here.

Also, with AMD being the only partner Apple has to worry about for dGPUs and eGPUs, the set of drivers that would need to be validated and bug-fixed is surprisingly manageable: Polaris, Vega, and Navi. I could see them ignoring Polaris in favor of Big Navi depending on timing.
 
Agree. One of the interesting bits of the talk was the work of isolating devices from each other for more secure DMA at the hardware level.

This is most likely about security issues with the Thunderbolt interface. If I understand it correctly, the IOMMU in Intel chips is flawed in a way that allows DMA exploits such as Thunderclap. Apple has most likely designed their own controllers that block these attacks.


Not much point to reminding folks about making sure to fix their drivers unless PCIe devices with third party drivers are still going to be a thing. iOS devices also use this architecture, sure, but it’s all on Apple’s side of things and not relevant to developers.

Of course they are a thing. A substantial part of WWDC was dedicated to DriverKit. It's actually a bit controversial, since Apple is moving all third-party drivers to user space, and there are concerns about performance. However, it is possible that their ARM CPUs will contain hardware that allows fast crossing between the kernel and userland.

Also, with AMD being the only partner Apple has to worry about for dGPUs and eGPUs, the set of drivers that would need to be validated and bug-fixed is surprisingly manageable: Polaris, Vega, and Navi. I could see them ignoring Polaris in favor of Big Navi depending on timing.

If the information released by Apple so far is accurate, the chances of seeing AMD dGPUs on Apple Silicon Macs are slim to none. I also wouldn't be too hopeful about eGPU support. I doubt that Apple will invest the effort in rebuilding all their AMD drivers for ARM just for the handful of eGPU users...
 
I firmly believe that for mainstream computing (aka the majority of what the consumer market encompasses), dedicated GPUs will meet the same fate that serial controller cards, dedicated storage controllers, and network controllers met many years ago. Integrated GPUs will eventually get to a point where they are fast enough for the huge majority of users, and dedicated chips will subsequently be relegated to niche markets and special use cases, such as competitive gaming, professional video and photo editing, GPGPU applications, and last but not least AI.

At the end of the day it's all a matter of performance. Once integrated features work well enough to satisfy most demands, supplying dedicated chips to perform the same task becomes sort of a moot point.
 
I've wondered if Apple couldn't make use of multiple Apple Silicon chips in a higher-performance desktop or workstation? After all, you have multiple ARM chips in supercomputers.
This may sound super funny to people with more knowledge, but if they increase the size of the chip, they should get more power, right? Like make it 3 times bigger than an iPhone A-series chip and get 3x the performance of the smaller chip?
 
This may sound super funny to people with more knowledge, but if they increase the size of the chip, they should get more power, right? Like make it 3 times bigger than an iPhone A-series chip and get 3x the performance of the smaller chip?
You could get more transistors with a larger die. This could be used to add additional CPU or GPU cores or more cache. More cache could possibly speed up both CPU and GPU performance.

There wouldn’t be a linear relationship between the size of the die and performance though. A 3x die wouldn’t be 3x the performance. But it almost certainly would be over 3 times the cost to manufacture because of yield. Larger dies will necessarily have lower yields.
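The yield effect can be illustrated with a toy Poisson yield model, where the fraction of defect-free dies is e^(−D₀·A) for defect density D₀ and die area A. The defect density and die areas below are assumed round numbers for illustration, not foundry figures:

```python
import math

# Toy Poisson yield model: why a 3x-larger die costs more than 3x as much.
# Defect density and die areas are assumed values for illustration.

defect_density = 0.1    # defects per cm^2 (assumed)
small_die_area = 1.0    # cm^2, roughly phone-SoC sized (assumed)
large_die_area = 3.0    # cm^2, the hypothetical 3x die

def poisson_yield(area_cm2, d0=defect_density):
    """Fraction of dies with zero defects: exp(-D0 * A)."""
    return math.exp(-d0 * area_cm2)

y_small = poisson_yield(small_die_area)
y_large = poisson_yield(large_die_area)

# Cost per *good* die scales with (area / yield):
cost_ratio = (large_die_area / y_large) / (small_die_area / y_small)
print(f"small-die yield: {y_small:.1%}, large-die yield: {y_large:.1%}")
print(f"relative cost per good large die: {cost_ratio:.2f}x")
```

With these assumed numbers the 3x die already costs over 3.6x per good die, and the gap widens quickly as defect density or area grows, which is the usual argument for chiplets over one huge monolithic die.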
 
Other than the high-level overview in the keynotes, Apple hasn't given us much on virtualization (the Linux arm64 kind). I hope it's ready with the first ASi Macs.
 
So, it's official: no dGPUs.
I'm curious about the future of the Mac Pro if this is the case. Apple introduced MPX slots with the latest Mac Pro; I wonder how that will transition to Apple Silicon? Also, I wonder if Apple will create something like Nvidia's NVLink? I don't see it, as that's probably too niche, but who knows?
 
Highly unlikely. Apple's comments are highly focused on performance/power. More likely they would create different CPU package(s) that contain "more stuff" than have to support inter-package connections.

macOS can't deal with more than 64 cores. Apple is not chasing ultra-high core counts. That isn't the point in the smartphones, and it isn't going to be the point in Macs either. IMHO I'd expect the revised Mac Pro to top out at no more than 32 cores. Maybe multiple dies in chiplet style, but that is substantively different.

The Mac Pro dumped multiple CPU packages seven years ago. Apple isn't going back. The norms of the primary parts of the Mac lineup will drive the base SoC package design rules.
I still haven't seen just how many cores Apple Silicon would need to approximate the GPU experience of the Radeon Pro 5700 XT in the high-end iMac. The full-size card is listed at 2560 cores. I know the iMac is using the mobile edition; still, it makes one wonder how the graphics cores in the iPad Pro's ARM chip compare to a high-end iMac's processor and GPU. I am not being negative here, just questioning what it takes to be equivalent? 🙂

How fast are Apple’s new ARM Mac chips? It’s hard to tell - The Verge

But are Apple’s ARM chips actually powerful enough now to replace the likes of Intel and AMD? That’s still an open question — because at Apple’s 2020 Worldwide Developers Conference (WWDC), the company shied away from giving us any definitive answers.
 
As of right now, I would assume the Mac family of Apple Silicon will be based on the A-series, but with both core counts and cache amounts increased. So instead of the current 2+4 (big/little) arrangement, we will likely see something like 8+6, with the attendant increases in cache. GPU-wise, I would again expect them to base it on the A14 GPU but at least double the core count.
 
I still haven't seen just how many cores Apple Silicon would need to approximate the GPU experience of the Radeon Pro 5700 XT in the high-end iMac. The full-size card is listed at 2560 cores. I know the iMac is using the mobile edition; still, it makes one wonder how the graphics cores in the iPad Pro's ARM chip compare to a high-end iMac's processor and GPU. I am not being negative here, just questioning what it takes to be equivalent?

From what I gathered (I still need to do more tests though), an Apple GPU core and an AMD GPU core are fairly similar in their FLOPS per clock: both can do up to 64 FP32 MADD operations per clock. The 5700 XT contains 40 cores (compute units); the Apple A12Z, for example, contains 8. If Apple were to scale that up to 40, they should be able to match or exceed the 5700 XT in compute performance. Note that Apple's GPUs are significantly faster (everything else being equal) for graphics, since they approach rendering in a more efficient manner.
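Plugging those figures into the standard theoretical-peak formula (cores × MADDs/clock × 2 FLOPs × clock) gives a rough sense of scale. The AMD clock below is a round number close to the card's boost clock; the Apple GPU clock is purely an assumption, since Apple doesn't publish GPU clock speeds:

```python
# Theoretical FP32 peak, given 64 MADDs (= 128 FLOPs) per core per clock.
# Clock speeds are assumed round numbers, not published Apple specs.

def peak_tflops(cores, clock_ghz, madds_per_core=64):
    """cores * MADDs/clock * 2 FLOPs per MADD * clock, in TFLOPS."""
    return cores * madds_per_core * 2 * clock_ghz / 1000

rx5700xt = peak_tflops(cores=40, clock_ghz=1.9)    # near AMD's quoted ~9.75 TFLOPS
a12z = peak_tflops(cores=8, clock_ghz=1.1)         # clock is a guess
a12z_40core = peak_tflops(cores=40, clock_ghz=1.1) # hypothetical scaled-up part

print(f"5700 XT:                      {rx5700xt:.1f} TFLOPS")
print(f"A12Z (8 cores, est. clock):   {a12z:.1f} TFLOPS")
print(f"hypothetical 40-core Apple:   {a12z_40core:.1f} TFLOPS")
```

The point of the exercise: at equal core counts the raw compute figures land in the same ballpark, so the gap is mostly clocks and core count, and this says nothing about graphics workloads, where tile-based deferred rendering changes the comparison.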
 