If you went to Intel or AMD with Photoshop and its internal routines, gave them all the Photoshop code, and said, "I want you to develop parts of your CPU/GPU to specifically run these functions Photoshop does,"
I'm sure both companies could create something that would blast through Photoshop functions at a blazing speed which is totally unmatchable by anything else.

Likewise, Apple can build into its chips ways for its own video editing and other functions to run super fast, so you end up with a system/chip that, when it's placed into a device and running those apps, is blindingly fast.
One could say that's amazing.
One could also say that's kinda cheating.

Optimizing that way simply hasn’t been common in a while.

Instead, the CPU vendor offers instruction extensions for particularly heavy yet common computations, and different levels of code try to take advantage of them where available: ideally, but least likely, at the compiler level (“this looks like an implementation of SHA-1, but the CPU has an optimized one built in, so I’ll use that”); often at the framework/library level (“you’re calling macOS’s SHA-1 function; on this modern CPU, it simply uses the built-in CPU implementation”); and only in the worst case, in your own code (“I heard Skylake added some SHA-1 instructions; let’s use those when available”).

Even in that last case, you probably don’t need the CPU vendor’s help, and other CPU vendors often eventually catch up with similar instructions. More likely, you’ll wait for the library you’re using (because you shouldn’t be implementing SHA-1 yourself anyway) to catch up and do that work for you, likely at higher quality because it’s more battle-tested (the more people use the library, the less likely common bugs stick around, whereas the only consumer of your own code is going to be yourself).

What you’re imagining here is the 1990s approach, where entire sections of C code suddenly become inline assembler. You can do that, and you’ll have to do it for each architecture, but it’s no longer common because of diminishing returns these days.
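To sketch what that worst case, “in your own code”, looks like today (a minimal, purely illustrative example, assuming x86 and a GCC/Clang-style compiler), you’d check for the instructions at runtime and dispatch accordingly rather than hand-writing assembler everywhere:

```c
/* Illustrative sketch: detect the x86 SHA extensions at runtime, the kind of
   check "your own code" has to do before using such instructions.
   Assumes x86 and GCC/Clang (uses their <cpuid.h> helper). */
#include <cpuid.h>
#include <stdbool.h>
#include <stdio.h>

/* CPUID leaf 7, sub-leaf 0: EBX bit 29 reports the SHA extensions. */
static bool cpu_has_sha_extensions(void)
{
    unsigned int eax, ebx, ecx, edx;
    if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
        return false;
    return (ebx >> 29) & 1;
}

int main(void)
{
    if (cpu_has_sha_extensions())
        printf("SHA instructions available: take the hardware path\n");
    else
        printf("No SHA instructions: fall back to a portable implementation\n");
    return 0;
}
```

And that’s just the detection; you’d still have to write and maintain the hardware path per architecture, which is exactly the work the library already does for you.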
 
  • Like
Reactions: Piggie
^ What I was thinking of was, remember when you would put things into the CPU to draw boxes on screens and windows overlapping others?
So your software did not have to do all the heavy work; all that type of stuff was hard-coded in the chip.

I guess it's perhaps like having sections of the chip dedicated to video/audio encoding, which is amazingly heavy/hard work to get done and needs to be done in hardware to get any realistic speed.

I guess the trick is knowing what you want to code the chip to do.
What functions are common and widely used enough that you want to dedicate CPU/GPU space to that specific function.
 
^ What I was thinking of was, remember when you would put things into the CPU to draw boxes on screens and windows overlapping others?
So your software did not have to do all the heavy work; all that type of stuff was hard-coded in the chip.

I guess it's perhaps like having sections of the chip dedicated to video/audio encoding, which is amazingly heavy/hard work to get done and needs to be done in hardware to get any realistic speed.

I guess the trick is knowing what you want to code the chip to do.
What functions are common and widely used enough that you want to dedicate CPU/GPU space to that specific function.
There’s nothing in CPUs for drawing boxes on screens or overlapping windows.
 
There’s nothing in CPUs for drawing boxes on screens or overlapping windows.

I'm probably getting confused with GPUs, as I recall many, many years ago when Windows first came out there was talk about the GPU handling all the window drawing/overlapping etc. to take the load off the CPU.

That said, any such common function that would be slower in software would of course heavily benefit from being put on the chip.
 
What?! No way that’s true. Apple would have signed on for capacity that TSMC is obligated by contract to fulfill. Also, the report says the majority of TSMC's 5nm capacity is going to Apple.
Coming to Apple later this year.
 
I'm probably getting confused with GPUs, as I recall many, many years ago when Windows first came out there was talk about the GPU handling all the window drawing/overlapping etc. to take the load off the CPU.

That said, any such common function that would be slower in software would of course heavily benefit from being put on the chip.
Yes, many years ago. BitBlt is probably the GPU function you are thinking of.
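Roughly speaking, a BitBlt is just a rectangle-to-rectangle copy of pixels. A toy software version (a sketch only, assuming plain 32-bit pixel buffers and skipping clipping and overlap handling) looks something like this; dedicated blitter hardware did the same copy without the CPU touching every pixel:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* A bare-bones framebuffer: width/height in pixels, stride in pixels
   (rows may be wider than the visible width), and 32-bit pixel data. */
typedef struct {
    int       width;
    int       height;
    int       stride;
    uint32_t *pixels;
} Surface;

/* Copy a w x h rectangle from (sx, sy) in src to (dx, dy) in dst.
   No clipping or overlap handling, to keep the core idea visible. */
static void bitblt(Surface *dst, int dx, int dy,
                   const Surface *src, int sx, int sy, int w, int h)
{
    for (int row = 0; row < h; row++) {
        const uint32_t *from = src->pixels + (size_t)(sy + row) * src->stride + sx;
        uint32_t       *to   = dst->pixels + (size_t)(dy + row) * dst->stride + dx;
        memcpy(to, from, (size_t)w * sizeof(uint32_t));
    }
}

int main(void)
{
    uint32_t src_px[8 * 8], dst_px[8 * 8] = {0};
    Surface src = {8, 8, 8, src_px}, dst = {8, 8, 8, dst_px};

    for (int i = 0; i < 8 * 8; i++)
        src_px[i] = 0x00FF0000;            /* fill the source with "red" */

    bitblt(&dst, 2, 2, &src, 0, 0, 4, 4);  /* copy a 4x4 block to (2, 2) */
    printf("dst(3,3) = 0x%08X\n", dst_px[3 * 8 + 3]);
    return 0;
}
```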
 
  • Like
Reactions: Piggie
And the iMac does go up to an i9-10910, which… close enough?

Yeah, the processor by itself might not make a big difference. But it's about more than that. I just updated my post with the following:

I should add the other characteristics of the desired headless Mac ("xMac") that aren't available in the Mac Mini, iMac, or iMac Pro are these standard tower features:
1) Full modularity, with replaceable, and thus upgradeable, RAM, SSDs, GPUs, and CPUs (like in the G5). [As opposed to the very limited modularity of the iMac/iMac Pro.]
2) Slots to accommodate two GPUs, and two or more storage drives.
3) A significantly expanded thermal envelope, enabling the machine to max out all cores and GPUs without significant thermal constraint, to do so quietly (that's been an issue for the iMac), and also (see no. 2) to accommodate up to two powerful GPUs.
[No. 3 *might* be mooted by AS; we shall see.]

The vast majority of Apple customers don’t seem to want the mythical xMac. In fact they don’t want any kind of desktop.

80% of Mac buyers purchase laptops. Another 10-15% (est.) buy iMacs. That means that between the Mac mini, iMac Pro and Mac Pro, very few units are sold. Splitting those 2-3 million units between four different models instead of three doesn’t seem to be anything Apple is interested in...

That depends. First, the market for the xMac might include those who now buy the iMac, but would prefer to buy the xMac and, say, one or more larger monitors. Second, the market is elastic. With the introduction of the xMac, some who now buy PC towers for the flexibility and modularity, but like macOS, might go with the xMac. These include those who used the G5, but switched to a PC when the trashcan came out, and can't afford the Mac Pro.

Neither of us, of course, really knows what the market would be, or what effect it would have on Apple's sales of other products.

regardless of how bad you might want it.
Never said I wanted it; it wouldn't fit my use case right now (though it might in the future). Just trying to articulate what those who want the xMac are looking for, and why they want it.
 
Last edited:
Thanks. Was asking as I was trying to work out whether a launch of a new iPad Pro/Apple Silicon Mac was possible this year.

Being conservative: 5,000 wafers × 450 chips per wafer = 2.25m chips per month.

If that’s roughly accurate, it seems like a November launch might just about be possible - but surely just for either the iPad Pro or the AS MacBook (I’m betting on the latter).

It's definitely realistic to see 5nm chips in Q4 2020; Apple is handing TSMC bags of cash to have first take of 5nm, which is fair enough. I expect the same thing to happen with 3nm, which is looking like it's on track. I'm just surprised Nvidia went with Samsung for their 3XXX series, as Samsung's 8nm process is barely competitive with TSMC's N7/N7+ process.

5nm (N5) 2020
5nm+ (N5P) 2021
3nm (N3) 2022

They seem to be able to get a lot out of each step simply through refining the process. Given the staggering cost involved with each jump in lithography, I'm not surprised to see an approach of introducing a new process and then refining it, rather than chasing every lithography jump. Intel clearly takes this to a ridiculous extreme, as their 10nm and 7nm processes both seem to be struggling at best, failures at worst.
 
  • Like
Reactions: bluecoast
My thought is this.

If you went to Intel or AMD with Photoshop and its internal routines, gave them all the Photoshop code, and said, "I want you to develop parts of your CPU/GPU to specifically run these functions Photoshop does,"
I'm sure both companies could create something that would blast through Photoshop functions at a blazing speed which is totally unmatchable by anything else.

Likewise, Apple can build into its chips ways for its own video editing and other functions to run super fast, so you end up with a system/chip that, when it's placed into a device and running those apps, is blindingly fast.
One could say that's amazing.
One could also say that's kinda cheating.

It would be like making a car that's only OK-ish, but you've built the car for one specific type of road, and when it's on that specific road, OMG, it's so fast that nothing else can touch it.
But when it's on other, normal roads, it's nothing special really.

I guess it depends on what roads you want to drive on, as to whether that's a great approach or not.

I'd love to think Apple's ARM version will be blazing fast with all software.
My gut tells me it will be specifically designed to run specific software really well, but will just be OK-ish when presented with other general software.

Not that this is a bad thing; it can be great if you can get all the devs to spend the time and money to fully optimise their apps for your specific silicon.

I guess we'll soon see what ARM can do on BIGGER machines that really need more power to run much heavier and full fat software.

Apple has shipped a framework named Accelerate for a few releases now that will automatically pick the most optimised implementation for a bunch of vector-related functions, including leveraging the neural processor if it's available. I don't think the neural processor is directly accessible otherwise, AFAIK. There's a bunch of other functionality where Apple frameworks will automatically leverage capabilities that aren't otherwise accessible.
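As a rough illustration (a sketch, not necessarily the recommended pattern), Accelerate's vDSP layer is a plain C API; you ask for the operation and the framework decides internally how to vectorise it for whatever CPU it's running on:

```c
/* Element-wise vector add via Accelerate's vDSP.
   macOS only; build with: clang demo.c -framework Accelerate */
#include <Accelerate/Accelerate.h>
#include <stdio.h>

int main(void)
{
    float a[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    float b[4] = {10.0f, 20.0f, 30.0f, 40.0f};
    float c[4];

    /* c[i] = a[i] + b[i]; the strides are 1, the length is 4.
       How this gets vectorised (SSE/AVX on Intel, NEON on Apple silicon)
       is the framework's problem, not the caller's. */
    vDSP_vadd(a, 1, b, 1, c, 1, 4);

    for (int i = 0; i < 4; i++)
        printf("%.1f\n", c[i]);
    return 0;
}
```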
 
Yeah, the processor by itself might not make a big difference. But it's about more than that. I just updated my post with the following:

I should add the other characteristics of the desired headless Mac ("xMac") that aren't available in the Mac Mini, iMac, or iMac Pro are these standard tower features:
1) Full modularity, with replaceable, and thus upgradeable, RAM, SSDs, GPUs, and CPUs (like in the G5). [As opposed to the very limited modularity of the iMac/iMac Pro.]
2) Slots to accommodate two GPUs, and two or more storage drives.
3) A significantly expanded thermal envelope, enabling the machine to max out all cores and GPUs without significant thermal constraint, to do so quietly (that's been an issue for the iMac), and also (see no. 2) to accommodate up to two powerful GPUs.
[No. 3 *might* be mooted by AS; we shall see.]

Yeah, I know what proponents of the xMac want. (Well, to the point that it's consistent, anyway. When Apple had a $2k tower, they wanted a $1k tower; now that Apple has a $6k tower, they want a $2k tower.)

I just don't think Apple is interested.

That depends. First, the market for the xMac might include those who now buy the iMac, but would prefer to buy the xMac and, say, one or more larger monitors. Second, the market is elastic. With the introduction of the xMac, some who now buy PC towers for the flexibility and modularity, but like macOS, might go with the xMac. These include those who used the G5, but switched to a PC when the trashcan came out, and can't afford the Mac Pro.

The market has been moving in the other direction for twenty years, though. Desktops have been the minority since 2005.

Yes, Apple could potentially make a very exciting new desktop again and shift the needle, but… why?
 
  • Like
Reactions: PickUrPoison
^ What I was thinking of was, remember when you would put things into the CPU to draw boxes on screens and windows overlapping others?
So your software did not have to do all the heavy work; all that type of stuff was hard-coded in the chip.

I guess it's perhaps like having sections of the chip dedicated to video/audio encoding, which is amazingly heavy/hard work to get done and needs to be done in hardware to get any realistic speed.

I guess the trick is knowing what you want to code the chip to do.
What functions are common and widely used enough that you want to dedicate CPU/GPU space to that specific function.

As @cmaier said, that's more likely found in the GPU.

But regardless, yes, like I said, you might find that the CPU vendor adds an instruction extension for this sort of feature. That sets off a dynamic, competitive race:

  • if library devs find the instruction interesting, they’ll offer high-level APIs to make it easier to use.
  • if compiler devs find a way, they'll detect such code and emit assembler with this instruction automatically.
  • if neither happens because it's too obscure, you're on your own.
  • competing CPU vendors will make equivalent instructions, which repeats the cycle.
In the ideal case, you don't have to change any code at all. You just upgrade to a newer version of the library you're already using, or a newer compiler. Then, when the architecture changes, you just compile to a different architecture.

For instance, suppose you need encryption as part of your code, and you use AES. You probably don't want to implement AES yourself; instead, you use the existing implementation from Apple, Microsoft, OpenSSL, whatever.

Now, encryption is slow. Intel proposed AES-NI a while back to make it faster: a set of CPU instructions that are implemented in hardware. AMD adopted it, and ARMv8-A has the "Armv8 Cryptographic extension" as an equivalent.

On the library side, OpenSSL adopted it. So if you were using OpenSSL for AES, and people got newer CPUs, and you eventually upgraded to a newer OpenSSL, your users automatically got a significant performance boost with no effort required on your end, and it works on x86 and ARM. (Also, SPARC and POWER.)
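For a concrete feel of that, here's a minimal sketch of AES-128-CBC through OpenSSL's EVP interface (error handling omitted, key/IV hard-coded purely for illustration). Whether the actual work lands on AES-NI, the ARMv8 crypto instructions, or a software fallback is decided inside the library, not in this code:

```c
/* Minimal AES-128-CBC encryption via OpenSSL's EVP API.
   Build with: cc demo.c -lcrypto
   Error handling omitted and key/IV hard-coded for brevity only. */
#include <openssl/evp.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned char key[16] = "0123456789abcde";   /* 16 bytes incl. the '\0' */
    unsigned char iv[16]  = "fedcba987654321";
    unsigned char plaintext[] = "the same code runs on x86 and ARM";
    unsigned char ciphertext[sizeof(plaintext) + 16];
    int len = 0, total = 0;

    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    EVP_EncryptInit_ex(ctx, EVP_aes_128_cbc(), NULL, key, iv);
    EVP_EncryptUpdate(ctx, ciphertext, &len,
                      plaintext, (int)strlen((const char *)plaintext));
    total = len;
    EVP_EncryptFinal_ex(ctx, ciphertext + len, &len);
    total += len;
    EVP_CIPHER_CTX_free(ctx);

    printf("ciphertext is %d bytes\n", total);
    return 0;
}
```

Swap the CPU underneath and recompile, and the same call sites keep working; the dispatch to whatever hardware is available happens inside libcrypto.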
 
  • Like
Reactions: Piggie
You can argue it's silly and that no one would spend time bothering to do it.
But could you go to a chip designer and say, "Here's the software Geekbench; I want you to study this code and put some custom silicon design into the CPU to do the exact tasks Geekbench is running"?
Not to cheat or give a false result, simply to do the exact tasks Geekbench runs as fast as possible?
 
Apple generally has its own benchmarks that it uses and cherry-picks data from to get those massive performance-improvement numbers. I'm not sure they're too concerned about ensuring the cross-platform stuff is there, as they generally only compare to their own devices when it comes to performance numbers.
 
  • Like
Reactions: Unregistered 4U
Apple generally has its own benchmarks that it uses and cherry-picks data from to get those massive performance-improvement numbers. I'm not sure they're too concerned about ensuring the cross-platform stuff is there, as they generally only compare to their own devices when it comes to performance numbers.

Oh yes, the classic "Up to 50% speed gain" and "Up to 100% increase in rendering performance"

That's about as honest as when shops have their "Everything on sale, up to 75% reductions" signs: you rush down there for bargains and find a set of rubbish cooking pans at 75% off, and everything else is about 5% to 10% off if you're lucky.
 
  • Like
Reactions: johngwheeler
Yeah, I know what proponents of the xMac want. (Well, to the point that it's consistent, anyway. When Apple had a $2k tower, they wanted a $1k tower; now that Apple has a $6k tower, they want a $2k tower.)
There's no need for the snark. Besides, it's unfair. Back when Apple offered the G5, in its various incarnations, many were happy to purchase it, at the price at which it was offered. I was one of them. Of course Apple customers will always grumble to some extent about Apple's pricing, but that's true for all of Apple's products.

I just don't think Apple is interested.

That's what I presented at the very start of this--Apple won't build it, because of cannibalization:

Indeed, one of the arguments I've heard for why we'll never see the elusive Mac tower* (aka "xMac") is that it would cannibalize sales from both the iMac and the Mac Pro. [High-end, e.g., Intel i9-10900K, but still consumer-grade.]

My argument has never been that Apple would build it. What I've been contending, with both the original poster with whom I interacted, and subsequently with you, is that none of Apple's current machines meet the requirements an xMac would. Simple as that. And you've not yet presented anything which counters that (indeed, the fact that you finally went ad hominem against those who'd like such a machine suggests to me that you realized you didn't have a substantive argument).
 
Last edited:
Question: When chips are being made, do MacBooks containing those chips come out in the same quarter?

Really hoping for a MacBook release this year, but so far most news is for every other device.
 
You can argue it's silly and that no one would spend time bothering to do it.
But could you go to a chip designer and say, "Here's the software Geekbench; I want you to study this code and put some custom silicon design into the CPU to do the exact tasks Geekbench is running"?
Not to cheat or give a false result, simply to do the exact tasks Geekbench runs as fast as possible?

could be done.

isn’t done.
 
none of Apple's current machines meet the requirements an xMac would.

No disagreement there.

(edit)

And I didn't mean to be snarky at you in particular. I just find the "xMac" thing silly. It wasn't going to happen under Jobs, and it's gonna happen even less under Cook.
 
Last edited:
You can argue it's silly and that no one would spend time bothering to do it.
But could you go to a chip designer and say, "Here's the software Geekbench; I want you to study this code and put some custom silicon design into the CPU to do the exact tasks Geekbench is running"?
Not to cheat or give a false result, simply to do the exact tasks Geekbench runs as fast as possible?

Well, actual benchmark cheating is a thing.

Optimizing your CPU to run well in benchmarks seems like a lot of work with… what gain, exactly? Looking good in reviews but being not that great in practice?
 
…which is looking like it's on track. I'm just surprised Nvidia went with Samsung for their 3XXX series, as Samsung's 8nm process is barely competitive with TSMC's N7/N7+ process.

TSMC has relatively limited capacity (compared to its older DUV, larger-than-7nm fab facilities). So pragmatically there is going to be a logjam on the DUV facilities, as it is slow to move things off to the next iteration.


Over half of the EUV Twinscan machines are not at TSMC. And there is pragmatically only one maker of EUV fabrication equipment, so if they don't produce the scanners, there's no more capacity than that. (Where the modified TSMC 7nm follow-ons use EUV for a couple of layers, that runs more directly into the Twinscan EUV cork.)


Somebody was going to have to "spread around" production loads. Apple is not (and is going to crank up even more by moving a substantive chunk of Intel production over to TSMC; the dies for Macs aren't going to get much smaller when Apple moves suppliers, so the wafers consumed will jump as well). And only a small handful of new machines come online each quarter. And Intel isn't really trying to buy as many as they should either (if production goes up in 2021-22, then Intel will probably soak up a substantive chunk of that increase).

Samsung's process may not be quite as good, but Nvidia probably has far, far more ability to crank up volume if demand spikes than AMD does. If it gets into a contest of who can supply demand better, then Nvidia could win. [In the context of a worldwide recession due to a pandemic, that isn't as likely, but when the decision was made three or so years ago that probably wasn't a major factor. If the economy and demand were really hot right now, Nvidia would clearly have the upper hand. AMD has basically painted themselves into a corner. That may work out for them short term if demand is limited, but there is 'lucky' and then there is 'good'.]

The much higher-margin datacenter A100 GPU is soaking up gobs of TSMC 7nm wafers at far better profit margins than the 3000 series is ever going to make. That is what Nvidia is doing with their TSMC wafer allocation (making more money).

Nvidia might need TSMC 7nm (or some variant, or 5nm) for a laptop variant, but their MX450 seems to be a decent stopgap for at least several months.


P.S. There is another issue too: the initial demand bubble for the PS5 and Xbox Series X is also going to skew AMD's 7nm wafer starts. Nvidia on Samsung is out of that demand squeeze as well.

Samsung's last tweak of DUV at 8nm was a relatively safe bet for the 3000 series. It is a desktop part, so "extra" power just comes out of the wall.
 
Last edited:
TSMC has relatively limited capacity (compared to its older DUV, larger-than-7nm fab facilities). So pragmatically there is going to be a logjam on the DUV facilities, as it is slow to move things off to the next iteration.


Over half of the EUV Twinscan machines are not at TSMC. And there is pragmatically only one maker of EUV fabrication equipment, so if they don't produce the scanners, there's no more capacity than that. (Where the modified TSMC 7nm follow-ons use EUV for a couple of layers, that runs more directly into the Twinscan EUV cork.)


Somebody was going to have to "spread around" production loads. Apple is not (and is going to crank up even more by moving a substantive chunk of Intel production over to TSMC; the dies for Macs aren't going to get much smaller when Apple moves suppliers, so the wafers consumed will jump as well). And only a small handful of new machines come online each quarter. And Intel isn't really trying to buy as many as they should either (if production goes up in 2021-22, then Intel will probably soak up a substantive chunk of that increase).

Samsung's process may not be quite as good, but Nvidia probably has far, far more ability to crank up volume if demand spikes than AMD does. If it gets into a contest of who can supply demand better, then Nvidia could win. [In the context of a worldwide recession due to a pandemic, that isn't as likely, but when the decision was made three or so years ago that probably wasn't a major factor. If the economy and demand were really hot right now, Nvidia would clearly have the upper hand. AMD has basically painted themselves into a corner. That may work out for them short term if demand is limited, but there is 'lucky' and then there is 'good'.]

The much higher-margin datacenter A100 GPU is soaking up gobs of TSMC 7nm wafers at far better profit margins than the 3000 series is ever going to make. That is what Nvidia is doing with their TSMC wafer allocation (making more money).

Nvidia might need TSMC 7nm (or some variant, or 5nm) for a laptop variant, but their MX450 seems to be a decent stopgap for at least several months.


P.S. There is another issue too: the initial demand bubble for the PS5 and Xbox Series X is also going to skew AMD's 7nm wafer starts. Nvidia on Samsung is out of that demand squeeze as well.

Samsung's last tweak of DUV at 8nm was a relatively safe bet for the 3000 series. It is a desktop part, so "extra" power just comes out of the wall.
I was reading this article at https://semiengineering.com/whats-next-for-euv/ about EUV and chip fabrication. Given that Intel can buy the same EUV Twinscan machines as TSMC (I read they're $120M - $180M each), why is Intel not able to keep up (or, if you prefer, down) with TSMC in device miniaturization?

Does TSMC have better masks and/or better resists? Or is there some other aspect of the production process that is specifically causing issues for Intel? Or is it that Intel's architecture makes it more challenging to reduce device size than is the case for the architecture used in the chips TSMC fabricates?
 
Or is it that Intel's architecture makes it more challenging to reduce device size than is the case for the architecture used in the chips TSMC fabricates?
Intel's main goal is to be as backwards compatible as possible, to avoid leaving room for competition. As a result, they have to carry forward even all the bad and buggy ideas from prior years. However, anyone using ARM is generally not concerned with it being compatible even with other ARM processors. I'd guess that most of the ARM chips out there are actually running embedded code that will ONLY run in that environment. That's what TSMC is building for Apple. I would guess that if they had to deal with something as complex as an Intel chip, they'd be having the same issues.

Intel could, overnight, decide to streamline their processor so that it’s not as backwards compatible BUT realizes amazing performance gains as a result.... and would likely have their lunch eaten by AMD from all the folks that JUST want more of the same, just a little faster.
 
  • Like
Reactions: theorist9
Intel's main goal is to be as backwards compatible as possible, to avoid leaving room for competition. As a result, they have to carry forward even all the bad and buggy ideas from prior years. However, anyone using ARM is generally not concerned with it being compatible even with other ARM processors. I'd guess that most of the ARM chips out there are actually running embedded code that will ONLY run in that environment. That's what TSMC is building for Apple. I would guess that if they had to deal with something as complex as an Intel chip, they'd be having the same issues.

Intel could, overnight, decide to streamline their processor so that it’s not as backwards compatible BUT realizes amazing performance gains as a result.... and would likely have their lunch eaten by AMD from all the folks that JUST want more of the same, just a little faster.
Thanks. But why would a more complex chip be more difficult for an EUV machine to draw? I would think the machine would be blind to the complexity, since it's just programmed to burn here, don't burn there.

Granted, Intel has this Foveros 3D chip stacking technology, but I don't think that's the sort of complexity to which you're referring.
 