I wonder if they will stop listing the CPU core count and clock speed when they release the first Apple Silicon Macs?

Currently they list the specs of the Intel CPUs, but they never promote the core count or clock speed of their iOS devices (as seen in the screenshots attached).
 

Attachments

  • Screen Shot 2020-10-13 at 3.07.20 pm.png
  • Screen Shot 2020-10-13 at 3.08.15 pm.png
They actually do promote the core count of their A-series processors, just not on the page you are looking at.

For example:

 
Hopefully not too OT, but I hope Apple simplifies the lineup. Do we really need a regular model, an "Air" model, and a "Pro" model? To add insult to injury, do they need to release different sizes of the same line at different times, fracturing the lineup even more? I just wish we could have a regular model built for thinness, and a "Pro" model built a bit thicker to allow for beefier CPUs/GPUs and a bigger battery. Do maybe 2 of each, released at the same time. The only difference between, say, a 13" and a 16" MBP would be just that: one has a 13" screen, the other has a 16" screen. Maybe allow some customization of the amount of RAM and SSD.
The three models are for profit margin optimization, not for convenience. The Pro models, for those who don't put cost first, will cost the most they think the market will bear with max specs. The regular model has lower specs and a lower price. The low-end model is priced lower for those who are really price sensitive, but hobbled enough that it isn't attractive to those who can afford the regular one. Even more models could optimize margins further, but then you would get decision paralysis and sales would suffer. And then, of course, there's more overhead from producing more models.
 
I thought about that a bit... On a desktop machine you don't care much about energy use. With laptops, some people don't care about energy use (at most an hour of battery on the train), some definitely do (on the road all day). So I could see two different designs with, say, 6 fast cores: one with 6x3.5 GHz, one with 3x3.5 GHz and 3x2.0 GHz. 3x2.0 GHz is not too slow and would take a lot less energy. So you would have one processor for maximum desktop power, and one that is reasonably fast for a very, very long time and still has considerable power either for a short time or when plugged in. (My calculation is that running at 3.5 GHz takes about three times the energy of 2.0 GHz.)
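For what it's worth, that roughly-three-times figure falls out of a simple first-order model; here is a minimal sketch, assuming dynamic power scales as C·V²·f and that supply voltage scales about linearly with frequency (a simplification, real voltage/frequency curves differ per chip):

```python
# Back-of-envelope check of the "3.5 GHz takes ~3x the energy of 2.0 GHz" estimate.
# Rough DVFS assumption: dynamic power ~ C * V^2 * f, with supply voltage V
# scaling roughly linearly with frequency f, so energy per unit of work
# (power / frequency) scales roughly with f^2.
f_fast, f_slow = 3.5, 2.0  # clock speeds in GHz

energy_ratio = (f_fast / f_slow) ** 2  # energy for the same work, fast vs. slow clock
power_ratio = (f_fast / f_slow) ** 3   # instantaneous power draw, fast vs. slow clock

print(f"energy per unit of work: ~{energy_ratio:.1f}x")  # ~3.1x
print(f"instantaneous power:     ~{power_ratio:.1f}x")   # ~5.4x
```

Under that same simplified model the faster clock also draws roughly five times the instantaneous power, which is where the heat and fan noise discussed below come from.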
Energy, no, but sound matters a lot. The ridiculous wattage used to get that performance generates heat that must be vented out. Who doesn't want a completely silent machine under full load? And who complains about a lower electricity bill? On a global scale, the billions of CPUs and GPUs in use add up to a large energy footprint; reducing that severalfold is desirable.
 
Yep, my 2018 iPad Pro blitzes my 16" MacBook Pro at 4K editing so much that I edit all my drone and GoPro footage on the iPad over the Mac! I'm really looking forward to the A14X or whatever the chip will be called.

Wait, really? Have you compared the export quality though? I am assuming a Mac Pro will do a much higher quality 4K export, and in less time, than the iPad Pro.
 
The only one I can really compare like for like is Adobe Premiere Rush, as the others I have are either iPad- or Mac-specific. On Rush, editing is definitely faster on the iPad Pro and the export quality is no different.
 
Damn man, would that be LumaFusion?
I do still need the full ease of use and layering better provided by a multi-screen, high-RAM desktop; however, I have heard of those comparisons... some codecs and resolutions (4K and above) can actually play in real time on iPad Pros and not on iMacs or Mac Pros. Mind-blowing.
LumaFusion is my fave; I also use Adobe Rush. Looking forward to when they finally put Final Cut Pro on the iPad.
 
Actually, that's incorrect. Docker on MacOS has a VM that it instantiates. You're not running your container on MacOS, you're running your container inside that VM.

Uhhhh wha?

The container is the (lightweight) VM.

That's true for many other toolsets; behind the scenes they're instantiating a Virtual Box/VMWare-based VM.

It's unfortunate, but true. Heck, you can't run an ARM container on MacOS/x86; it doesn't work that way. You'd have to get an ARM emulator and run docker-on-ARM inside it (is there docker-on-ARM?).

No idea what you’re on about. Right now, you can run a Linux/x86 container on macOS/x86. For almost anything you’d do with that, you can just swap both of those with ARM. No emulation needed.

And yes, there’s Docker on ARM. Apple even mentioned it.
 
I would be so down for a documentary, tech story piece, or similar to walk me through how all of this happened... how ARM went from underpowered but efficient devices to what basically seems to be a silver bullet in computation.
How come Intel didn't see it coming, or if they did, why didn't they steer the boat, or is there anything to be done at all...

Part of it is that they missed the boat. Another part is thin margins, especially for Android. They did pass on the chance to be Apple's vendor, though.
 
The 6 CPU cores, impressive as they are, do seem to dominate the discussion.

What fascinates me is those 16 other cores. The neural engine is now the most powerful part of this SoC, and it has gone from nonexistent, to a neat bit-part player, to the single largest part of the latest A-series chip.

The computing power and flexibility provided by the neural engine is simply astonishing and yet I have precious little understanding of what it can do and what it will enable in the near future.

Setting aside the 4 GPU cores for the moment, the technology threshold crossed with 22 cores arranged in a 16+4+2 format suggests that those 16 neural engine cores are no longer a bit-part player in the design.

Interesting times.
 
Mannyvel is correct - on non-Linux systems such as macOS and Windows, Docker runs inside a VM created by a lightweight local hypervisor: HyperKit (built on macOS's Hypervisor.framework) in the case of macOS, and Hyper-V on Windows.

Docker requires a Linux kernel, which of course neither macOS nor Windows has natively, but both can run one in a virtual machine.

Apple Silicon will support virtualization (according to Apple's presentations), but this will run ARM versions of guest OSes, such as the various versions of Linux for ARM, as demoed at WWDC.

For Docker containers, you should be able to run native ARM-based Docker containers directly - i.e. a version of Docker (and container images) that has been built for ARM. If you want to run x86 Docker containers on Apple Silicon, you would need some kind of emulation layer such as QEMU.
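To make that concrete, here is a minimal sketch that can be run on an Intel Mac today. It assumes Docker Desktop is installed with its qemu/binfmt emulation for foreign architectures available (and that the --platform flag is enabled; older releases gated it behind experimental CLI features); arm64v8/alpine is just an example image:

```python
# Minimal sketch: where Docker actually runs on a Mac, and running a
# foreign-architecture container through emulation.
import subprocess

def docker(*args):
    """Run a docker CLI command and return its trimmed stdout."""
    result = subprocess.run(["docker", *args], capture_output=True,
                            text=True, check=True)
    return result.stdout.strip()

# The daemon answers from the Linux VM that Docker Desktop manages,
# so this reports "linux/..." even though the host is macOS.
print(docker("version", "--format", "{{.Server.Os}}/{{.Server.Arch}}"))

# An arm64 image on an x86 host runs via qemu user-mode emulation;
# on an Apple Silicon host the same image would run natively.
print(docker("run", "--rm", "--platform", "linux/arm64",
             "arm64v8/alpine", "uname", "-m"))  # prints: aarch64
```

On an Apple Silicon Mac the situation flips: arm64 images run natively inside the ARM Linux VM, and it is the x86 images that need the emulation path.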
 
They provide entertainment to likely millions of people, which especially during this time is needed.

So do “camwhores” and “models” on OnlyFans, but nobody is trying to put them on a pedestal 😉 And they often make even more money and entertain more people.

Stop making stupid people famous.

The real problem is that the gamer dude is making a completely irrelevant and pointless argument; Apple was never a gamer-oriented company. Even with a 15" MBP with the most expensive guts, I get PlayStation quality, assuming I can even find the game for macOS.
 
I don't see what's special about this chip. It's more powerful, yes, but that's been true of every new chip Apple has released since the original iPhone. Why do they make this sound like a revolution in tech?
 
Mannyvel is correct - on non-Linux systems such as macOS and Windows, Docker runs inside a VM created by a lightweight local hypervisor: HyperKit (built on macOS's Hypervisor.framework) in the case of macOS, and Hyper-V on Windows.

Yes, and on Linux, it uses stuff like LXC. It's still virtualization. Not the heavy-duty virtualization like VMware, but still.

Apple Silicon will support virtualization (according to Apple's presentations), but this will run ARM versions of guest OSes, such as the various versions of Linux for ARM, as demoed at WWDC.

For Docker containers, you should be able to run native ARM-based Docker containers directly - i.e. a version of Docker (and container images) that has been built for ARM. If you want to run x86 Docker containers on Apple Silicon, you would need some kind of emulation layer such as QEMU.

Right.
 
AMD is laughing their butts off at the amateurs at Apple. Serious people like me need serious CPUs and GPUs for serious productivity. I’m pwning people at Overwatch all day and night on the highest settings.
lol, just wait until Apple scales up what they've been making into desktop-class chips. We're talking about what's possible in a fanless mobile device.
 
I would be so down for a documentary, tech story piece, or similar to walk me through how all of this happened... how ARM went from underpowered but efficient devices to what basically seems to be a silver bullet in computation.
How come Intel didn't see it coming, or if they did, why didn't they steer the boat, or is there anything to be done at all... why RISC made sense, but then CISC, and then back again to a RISC-like architecture.
I'm guessing we still have to see what's really going to happen; I don't think we can write off Intel just like that. And have they shown any hints of feeling threatened? Because I haven't seen anything.

So many questions.

It would certainly be a fascinating documentary, at least for the likes of us. I do a lot of investing in this area and also have an MSc in EEE, so I'm pretty acutely tuned into what is happening and where things are going. Even I admit to being somewhat ignorant of RISC vs CISC before about 2010.

As for Intel, they are the 800 lb chipzilla, so you can't ever write them off. But the ground is shifting out from under them. They got extremely complacent when AMD was offering zero competition during the Bulldozer era, when AMD's chips were just junk. Intel makes its real money from server chips, and AMD is coming in like a fat guy at a buffet to gobble things up from one side, while ARM designs are coming of age and beginning to eat them from below with ever more powerful and power-efficient designs. Intel cannot compete with AMD - which, if you had told me that 5 years ago, I wouldn't have believed you. AMD's chiplet design offers better performance (more cores) for less money on a better lithography process, thanks to spinning off GloFo and going all in with the leader, TSMC (whose stock I also own). Intel can't do much except watch their golden goose get eaten until they can bring out competitive chiplet designs - and by the looks of things, they might even have TSMC begin fabbing their chips.
Zen 3 was the killer blow: AMD now holds the multicore performance crown, the single-core performance crown, and likely the power-efficiency crown, and they can do it for less money than Intel's monolithic chips cost to make - and they haven't even moved to TSMC's 5nm process yet. I've seen that TSMC's 5nm process has surprisingly good yields for such a new one.

I think in the end we will see a much more competitive market, with AMD commanding a very healthy share and Intel finally waking up and competing again. I just hope AMD can begin to work some of that magic on their GPU lineup, which has struggled for some time. Nvidia has grown far too greedy and anti-competitive, a bit like Intel a few years ago, to be honest.

I just hope Nvidia owning ARM doesn't mean they begin to use that as an anti-competitive tool in terms of licensing etc. I wish ARM were open 'source' like RISC-V. I believe Apple has a permanent license for ARM, so they don't have to worry. I just know how unethical Nvidia can be - even if I've been buying their chips since the FX 5600.
 
Myself, I got a BS in Electronics Engineering 14 years ago and did see a bit of computer architecture, but I was too young and naive to really grasp what was going on... let alone deeply understand the meaning of PNP vs NPN transistors, saturation regions, or all the signal-processing soup that was thrown at me at some point. Then after that I did NOTHING related to that field and went full-on into 3D/CG and video games. Regretting it a tiny bit now.

There's a common trend here: TSMC, these guys and their manufacturing process... it seems like they have no competition. Let's add them to said documentary too.
 
Don't regret it. I have a Ph.D. in EE and did CPU architecture, microarchitecture, logic design, circuit design, and physical design for years.

It's not that much fun, and the schedules are just as stressful as in the video game industry :)
 
I just hope Nvidia owning ARM doesn't mean they begin to use that as an anti-competitive tool in terms of licensing etc. I wish ARM were open 'source' like RISC-V. I believe Apple has a permanent license for ARM, so they don't have to worry.
An interesting point I heard John Siracusa make is that, yes, Apple apparently has a license that lets them use the instruction set "forever" (and they've already got better implementations than anyone else), but Apple benefits from having instruction set compatibility with ARM, just as they've had considerable benefit from having the same x86_64 instruction set as everyone else (e.g. being able to run VMs). If Apple's license doesn't cover future updates to the instruction set, and if Nvidia added instructions that became popular with other OSes (some ARM Linux, say), then Apple might have to renegotiate that perpetual license, which could pose headaches for them.

An interesting counterpart to that is, presumably Apple could add Apple-specific instructions to the instruction set, to the benefit of macOS (and iOS), which would, as a side effect, make ARM Hackintoshes much more tricky to implement.
 
That's my understanding as well — they can be compatible with ARMv8.x as long as they like, but it's unclear if that extends to the upcoming ARMv9. (It's possible that it does, since Apple is one of the founding companies of ARM, which used to be an Acorn/Apple/VLSI joint venture.)

If they can't negotiate ARMv9 terms, though, I'm not sure that's a huge deal. It'll be years before that's common in other ARMv9 chips, and therefore before compilers are commonly tweaked to take advantage of it. Those lost years could conceivably be used by Apple to fork ARM instead, or to move to something different altogether. (I don't think they'll bother with RISC-V if so. They can just do their own ISA.) Right now? Too risky. But half a decade from now, slowly move to their own thing? Why not.

And as for forking, see below:

An interesting counterpart to that is, presumably Apple could add Apple-specific instructions to the instruction set, to the benefit of macOS (and iOS)

Yup! See, for instance, armv7s — Apple doing their own extensions isn't unprecedented.

, which would, as a side effect, make ARM Hackintoshes much more tricky to implement.

Possibly.

They can do what they've done before with e.g. armv7s and output a fat binary where CPUs that have those extensions take advantage of them, as optimization.

But then they might eventually switch all supported chips to implement that extended ISA, and then drop compiling for the regular one altogether, at which point, indeed, you'd have a copy protection of sorts. Due to their relatively long support of hardware, that'd have to be a plan spanning several years. Unless, of course, they do it right from the start with all ARM Macs, because there'd be no old ARM Macs to support at that point anyway. We'll know(-ish) if the first batch of ARM Macs ships with a compiler that can target a hypothetical 'arm8mac'.
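For anyone curious what the fat-binary mechanism looks like in practice, here is a minimal sketch for checking which architecture slices a binary actually carries. It assumes a Mac with the Xcode command line tools installed and a universal binary to point it at (on Big Sur and later, system binaries such as /bin/ls ship with x86_64 and arm64e slices; on older releases, pick any app you know is universal):

```python
# Minimal sketch: list the architecture slices baked into a (possibly fat) Mach-O
# binary via `lipo -archs`. At launch the system picks the best slice the CPU
# supports, which is how the old armv7/armv7s fat binaries worked on iOS.
import subprocess

def arch_slices(path):
    """Return the architecture slices of a Mach-O binary, e.g. ['x86_64', 'arm64e']."""
    out = subprocess.run(["lipo", "-archs", path],
                         capture_output=True, text=True, check=True)
    return out.stdout.split()

print(arch_slices("/bin/ls"))  # assumption: /bin/ls is universal on Big Sur and later
```

A hypothetical Apple-extended slice would just be one more entry in that list, right up until, as described above, the plain ARM slice stops being emitted.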
 