But yes, Apple is more impressive than either of the two, at least as far as single-core performance is concerned. In multi-core, Intel seems to do much better so far, and AMD even better. (For example, the A12X has eight cores, but they only deliver 4.14 times the single-core score. The high-end 16-inch MacBook Pro with a 9980HK also has eight cores, and does 6.25 times the single-core score. Techniques like HyperThreading probably help there. But AMD does even slightly better — Renoir's 4900HS seems to average 6.45.)
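(If anyone wants to double-check that scaling arithmetic, here's a throwaway C snippet; the figures are just the ones quoted above, not fresh measurements, and as pointed out below the A12X number mixes big and little cores.)

[CODE]
/* Scaling factor = multi-core score / single-core score.
   "Efficiency" = scaling factor / core count. Figures are the ones quoted above. */
#include <stdio.h>

struct chip { const char *name; int cores; double scaling; };

int main(void) {
    struct chip chips[] = {
        { "Apple A12X",       8, 4.14 },
        { "Intel i9-9980HK",  8, 6.25 },
        { "AMD Ryzen 4900HS", 8, 6.45 },
    };
    for (int i = 0; i < 3; i++) {
        printf("%-18s %.2fx over single core = %.0f%% of ideal %d-core scaling\n",
               chips[i].name, chips[i].scaling,
               100.0 * chips[i].scaling / chips[i].cores, chips[i].cores);
    }
    return 0;
}
[/CODE]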

The A12X has heterogeneous cores - 4 high-performance and 4 low-power cores. The latter are much slower than the former. So your comparison is pretty much meaningless.
 
Runs faster on an ARM N1 AWS server TODAY.

Cross-compiling isn't a new thing in 2020. We've always been compiling ARM code on x86 machines.


Why does that matter?

It makes it feel like you are coding on a machine that runs the same CPU as the target.

Remember, even Intel CPUs have instruction-set differences. I always turn on AVX-512 when compiling for Xeon, even on my laptop, which can't run those instructions at all.
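To make that concrete, here is a rough sketch of the kind of thing I mean (my own toy example, not production code): the AVX-512 path gets compiled on the laptop, but it only ever executes on a CPU that actually reports AVX-512 support, so the build machine never has to be able to run the instructions it emits.

[CODE]
/* sum.c -- build on a laptop without AVX-512, run the fast path only on a Xeon
   that has it. Build sketch:  clang -O2 sum.c -o sum
   (the AVX-512 code is enabled per-function via the target attribute). */
#include <immintrin.h>
#include <stdio.h>

__attribute__((target("avx512f")))
static double sum_avx512(const double *v, int n) {
    __m512d acc = _mm512_setzero_pd();
    for (int i = 0; i + 8 <= n; i += 8)          /* tail handling omitted for brevity */
        acc = _mm512_add_pd(acc, _mm512_loadu_pd(v + i));
    double lanes[8];
    _mm512_storeu_pd(lanes, acc);
    double s = 0.0;
    for (int i = 0; i < 8; i++) s += lanes[i];
    return s;
}

static double sum_scalar(const double *v, int n) {
    double s = 0.0;
    for (int i = 0; i < n; i++) s += v[i];
    return s;
}

int main(void) {
    double v[16];
    for (int i = 0; i < 16; i++) v[i] = i;
    /* Take the AVX-512 path only if the CPU we are actually running on supports it. */
    double s = __builtin_cpu_supports("avx512f") ? sum_avx512(v, 16)
                                                 : sum_scalar(v, 16);
    printf("sum = %f\n", s);
    return 0;
}
[/CODE]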

I've tried compiling the app on a Neoverse N1 server, and it doesn't perform as well.

Yes, it does matter. Apple has always been a developer-friendly ecosystem, and a large portion of MB/P purchases are because the laptops Apple builds are so versatile.
 
I've tried compiling the app on a Neoverse N1 server, and it doesn't perform as well.

Yes, it does matter. Apple has always been a developer-friendly ecosystem, and a large portion of MB/P purchases are because the laptops Apple builds are so versatile.

Compiling your apps in the cloud has no relation to which CPU architecture your local host has. Also, your local host can cross-compile to whatever target architecture you want (e.g. when using clang/llvm).
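For example (a sketch; the target triple and sysroot path are placeholders you would adapt to whatever cross toolchain you actually have installed):

[CODE]
/* hello.c -- one source file, built for two architectures on the same x86 host.
 *
 * Native build:  clang -O2 hello.c -o hello_x86_64
 * Cross build:   clang -O2 --target=aarch64-linux-gnu \
 *                      --sysroot=/path/to/aarch64-sysroot \
 *                      hello.c -o hello_arm64
 *                (assumes an AArch64 sysroot and linker are installed;
 *                 the sysroot path here is just a placeholder)
 *
 * `file hello_arm64` should report an ARM aarch64 executable even though
 * it was produced on an x86 machine.
 */
#include <stdio.h>

int main(void) {
#if defined(__aarch64__)
    puts("built for AArch64");
#elif defined(__x86_64__)
    puts("built for x86-64");
#else
    puts("built for some other architecture");
#endif
    return 0;
}
[/CODE]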
 
And to further add to this conversation -- not only did I not know that Intel is allowing Thunderbolt on AMD, but Thunderbolt as a separate standard will soon be irrelevant, with Thunderbolt and USB 4 being compatible with each other. Which I bet is what led to AMD being able to add Thunderbolt to its motherboards.
 
Compiling your apps in the cloud has no relation to which CPU architecture your local host has. Also, your local host can cross-compile to whatever target architecture you want (e.g. when using clang/llvm).

Noted. However, I wanted to see the performance of the remote system when compiling apps.
And to further add to this conversation -- not only did I not know that Intel is allowing Thunderbolt on AMD, but Thunderbolt as a separate standard will soon be irrelevant, with Thunderbolt and USB 4 being compatible with each other. Which I bet is what led to AMD being able to add Thunderbolt to its motherboards.

Is USB4 backwards compatible with Thunderbolt? Not sure why it's relevant. AFAIK, not all AMD mobos support Thunderbolt?
 
Is USB4 backwards compatible with Thunderbolt? Not sure why it's relevant. AFAIK, not all AMD mobos support Thunderbolt?

Well, earlier on I made a comment that ended up not being correct: that Apple wouldn't switch to AMD, because I doubted Intel would license Thunderbolt to their competitor. I was wrong, though; I didn't know they had done exactly that in February of this year.

And I recently heard that USB 4 was coming this year, so I looked into it. It looks like Intel gave their Thunderbolt specs to the organization that controls USB, and USB4 was made compatible with Thunderbolt 3. You can still use your Thunderbolt 3 devices on all USB4 ports, but I don't think it works the other way around.

Sorry, that's why I said it was sorta relevant. Just that Apple could easily add USB 4 to their new ARM Macs, anyone's existing Thunderbolt 3 devices would still work, and any new USB4 devices would also work.
It is only comparable in Apple's marketing material, according to 3rd party reviews.

It's not like I went around and read every review, or watched a bunch of videos about Apple's ultra-expensive monitor. I'm never going to need one, or even be able to afford one.

But the reviews I did watch liked it and had nothing but good things to say. It's the stand they didn't like, which is ironic because it costs so much, haha.
 
The big thing with the Mac Pro at the moment is the ASIC (Afterburner card). Apple really should be making those standard on the Mac Pro.

There is about zero good reason to do that, for two major reasons. First, the card does one and only one thing (decode the various formats of ProRes). Perhaps there will be a later upgrade that adds ProRes encoding and/or an updated ProRes format to the mix, but it will still basically be one narrow niche of work. For video editors dealing with 6-8K footage, or with more than several simultaneous 4K decode streams, it is a value add, but otherwise it doesn't do anything for you. The Mac Pro isn't a narrow, siloed computer (it's not for A/V use only).

Strictly speaking it is not an ASIC card, but pragmatically it is. It is an FPGA, so it could change, but since Apple only calls it from their A/V core libraries it is effectively an ASIC. And not being "end user" programmable means it comes with whatever modes (and switching between modes) Apple ships.

The second major reason is cost. Adding $2K to every Mac Pro would be even worse than the effective $6K floor they have put on the device. Even if buying them in larger bulk got the cost down to $1-1.5K, that would still make the system increasingly uncompetitive with other workstations for many workloads.

If the pitch is that Apple would ship a developer environment so folks could adapt the FPGA to multiple other usages, that again would add costs.

"Everybody" doesn't need the Afterburner card any more than "everybody" need 20 cores.



If Apple can get app developers on board to support that properly, then the big number-crunching work doesn't need the CPU so much to get done.

If app developers make calls into Apple's foundational A/V core libraries to open and decode ProRes files, and Afterburner is present... then it gets used. Developers would have had to go out of their way to avoid Apple's libraries in order not to have already set up the preconditions to leverage Afterburner. Using those libraries is what Apple had been asking developers to do in the first place, before Afterburner showed up.

However, you are also vastly overselling how much work Afterburner gets done. Afterburner has more of a multiplier effect due to the partial CPU unloading it does. If the CPU isn't decoding ProRes, that leaves more CPU headroom for other stuff (effects, encode, etc.); those other tasks aren't disappearing. Likewise it keeps the decode process off the GPU (if computation is being tossed there). The user can work well with a 12-16 core system as opposed to needing a 24-28 core one, but still has that baseline of 8-10 cores to do the rest of the workload being thrown at the workstation.


Which leaves apple free to switch the CPU out for something else without losing as much performance in Afterburner-accelerated applications.

Apple could throw a relatively weak A-series derivative into the Mac Pro because Afterburner would be carrying most of the 'water'? That would fail.

Will it work out that way? Who knows. But I see afterburner as Apple's exit strategy from intel at the high end.

Afterburner isn't an exit from Intel at all. Afterburner is a mechanism to get the ProRes format more traction in the video storage format space; that is mainly it. It will help ProRes RAW get adopted in more video cameras as a high-end alternative format, or enable HDMI RAW output to be sent to an external recorder (e.g. an Atomos) to be encoded as ProRes RAW. For example, the Panasonic DC-S1H added support recently.

And Apple is putting ProRes RAW on Windows.

Again, this is to promote wider adoption of the format (and to draw a bit of an "underline" under the Afterburner + macOS + Mac Pro combo having a performance edge in that expanded ecosystem).

[If some open video RAW format took off in adoption, then Apple could selectively cover that also. But the Afterburner focus would probably always include Apple's solution for this general task.]

That is completely orthogonal to whether Intel (versus AMD or some other vendor) supplies the CPU in the Mac Pro.
 
The A12X has heterogeneous cores - 4 high-performance and 4 low-power cores. The latter are much slower than the former. So your comparison is pretty much meaningless.

For folks consumed with tech-porn, CPU-only benchmark scores, the "low power" cores can largely keep up if you hand them code and data that largely sit in the on-chip cache hierarchy.

They will "benchmark" better on many relatively computationally light workloads. But if you throw a heavily vectorized (e.g. AVX-512, AVX-128 class) workload at them, they won't keep up (it significantly increases memory pressure, is more SMT/Hyper-Threading friendly, etc.).
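A crude way to see the working-set effect on whatever machine is handy (a generic sketch; the sizes and iteration counts are arbitrary placeholders): serially chase pointers through a small array that fits in cache and through a large one that doesn't, and compare the nanoseconds per access.

[CODE]
/* Pointer-chase over a cache-resident vs a memory-bound working set. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static double ns_per_access(size_t n, long iters) {
    size_t *next = malloc(n * sizeof *next);
    if (!next) return -1.0;
    for (size_t i = 0; i < n; i++)
        next[i] = (i + 97) % n;                 /* strided walk; 97 is coprime with these sizes */
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    size_t p = 0;
    for (long i = 0; i < iters; i++)
        p = next[p];                            /* serially dependent loads */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    free(next);
    if (p == (size_t)-1) printf("unreachable\n");  /* keep p live */
    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    return ns / iters;
}

int main(void) {
    printf("small working set (~32 KB):  %.2f ns/access\n",
           ns_per_access(4 * 1024, 50000000L));
    printf("large working set (~256 MB): %.2f ns/access\n",
           ns_per_access(32 * 1024 * 1024, 50000000L));
    return 0;
}
[/CODE]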

You have to settle on a common notion of what the performance criterion is. If it is the "lowest common denominator" benchmark that runs on-chip, that is one thing. If it is more holistic system performance, that is another.


The other issue is how long the benchmarks run: a 1-2 minute multicore sprint, or a 2-3 hour computation. If the heat from the lengthy computation pushes the x86 chip toward its base frequency, it can get closer (presuming Apple's thermal management isn't tuned to 'hot rod' tech-porn benchmarks; it probably is to some extent, since that is a marketing tool many CPU implementers use).
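And for the sprint-versus-sustained question, something like this throwaway loop shows it (a generic, single-threaded sketch: run one copy per core to heat the whole package, and let it run for hours rather than minutes). If the later windows report a noticeably lower percentage than the first one, the chip is sliding back toward its base clock.

[CODE]
/* Track throughput per 30-second window over a long run. */
#include <stdio.h>
#include <time.h>

static double now_sec(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void) {
    const double window  = 30.0;   /* seconds per measurement window */
    const int    windows = 20;     /* ~10 minutes; raise this for an hours-long run */
    volatile double x = 1.0;       /* volatile so the work isn't optimized away */
    unsigned long long first = 0;

    for (int w = 0; w < windows; w++) {
        double start = now_sec();
        unsigned long long passes = 0;
        while (now_sec() - start < window) {
            for (int i = 0; i < 1000000; i++)
                x = x * 0.999999 + 1.0;         /* bounded floating-point busywork */
            passes++;
        }
        if (w == 0) first = passes;
        printf("window %2d: %llu passes (%.1f%% of window 0)\n",
               w, passes, 100.0 * (double)passes / (double)first);
    }
    return (int)x * 0;                          /* keep x live */
}
[/CODE]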
 
This is going to be the death of Mac computers as a whole. ARM Macs won't be compatible with any of the existing software until developers update it, and most of it will be left behind. Microsoft tried to transition to ARM with the Surface Pro X, and Windows 10 on ARM has been a failure. I expect this to fail as well, especially since ARM will probably not have the same performance as x86-64 for all tasks.

Uhhh... have you been living under a rock? This is what happened with Catalina already. Any Mac applications not in current development already won't run. None of mine would even run in a virtual machine, because VMs on the Mac can't access GPU drivers and OpenGL just crashes upon initialization. Almost all programs that will run under Catalina are actively developed and should have no issues moving to ARM. Apple's guideline for Catalina? Keep an old Mac laptop around to run your older applications.

This sucks for me, but anecdotally, most of the Mac users I know aren't so attached to software and happily went out and bought all new software for Catalina. And there will be new software developed for ARM Macs.
 
The A12X has heterogeneous cores - 4 high-performance and 4 low-power cores. The latter are much slower than the former. So your comparison is pretty much meaningless.

Aaaaaactually, I'm afraid you have a good point there. My comparison is misleading. I've edited it.
Uhhh... have you been living under a rock? This is what happened with Catalina already. Any Mac applications not in current development already won't run. None of mine would even run in a virtual machine, because VMs on the Mac can't access GPU drivers and OpenGL just crashes upon initialization.

I mean, most of that is 32-bit, and if an app isn't available as 64-bit, that means it hasn't been updated in close to eleven years. So "any app not in current development" is a bit much.
 
Ah, that makes sense.

Either way, I am pretty sure the Neoverse N1 is more powerful than any Apple chip as of right now, and that Apple will make their own version of it for the Mac Pro.

The N1 is not inherently more powerful or less powerful.

N1 is an on-paper design. If someone knew how to build it at 3nm, it could be faster than the A13.

But the current Graviton 2 implementation isn't faster than the A13. The A13 has about 1.5x the single-core performance of Graviton 2's N1.
 
I use it to move assets into Poser. For that, it is great. Pity that is all it is good for. It is easily the most poorly coded 3d app I have used in the past 15 years.



That is true - and neither the 6,1 nor the 7,1 Mac Pro increased the professional base of OSX users.

Yes, my copy of 3ds Max is an old version. All I use it for is converting assets from the 3ds Max native format to .OBJ format. Same thing with my copy of LightWave.

I have LightWave 3D, ZBrush, and Poser. I deliberately made sure my choices were cross-platform.

I was hoping for a mid-tower. But a 32 inch iMac with much improved specs would do. It won't stop me from getting a PC tower now. Apple had their chance with the new 'Mac pro' and blew it for the enthusiast/gamer crowd.

Azrael.
It is only comparable in Apple's marketing material, according to 3rd party reviews.


Sounds like GPU envy to me.

Like it or not, CUDA is very useful for many different types of computing. Just not on the Apple platform.

Well, ironically, Macs have CPU and GPU envy right now. Both Radeon and Intel are lagging behind their competition. Politics are getting in the way of Apple choosing what's best for the consumer. Unlike with PPC, Apple have the choice to give customers better specs, and better value. The Radeon 5700 XT has been available for ages and still Apple can't even offer that in the Mac range below £6k.

Azrael.
 
It is only comparable in Apple's marketing material, according to 3rd party reviews.


Sounds like GPU envy to me.

Like it or not, CUDA is very useful for many different types of computing. Just not on the Apple platform.

CUDA is not useful at all, except for earning money for NVIDIA.

I'm currently using two 2080 Tis for machine learning, but that is cuDNN's job and not related to CUDA.

CUDA is dead already, and NVIDIA just tries to make people believe they still need it.
Noted. However, I wanted to see the performance of the remote system when compiling apps.


Is USB4 backwards compatible with Thunderbolt? Not sure why it's relevant. AFAIK, not all AMD mobos support Thunderbolt?

You will never see the performance while compiling -- you have to run the binary to see the performance, even if you are not cross-compiling.

All AMD mobos could run Thunderbolt today -- just plug in an Intel Titan Ridge TB3 card and that's it.

The only problem right now is that no one except Intel has a good Thunderbolt controller. ASMedia is working on one, and rumor says it will ship with the next high-end AMD motherboards.
 
CUDA is not useful at all, except for earning money for NVIDIA.

I'm currently using two 2080 Tis for machine learning, but that is cuDNN's job and not related to CUDA.

CUDA is dead already, and NVIDIA just tries to make people believe they still need it.


You will never see the performance while compiling -- you have to run the binary to see the performance, even if you are not cross-compiling.

All AMD mobos could run Thunderbolt today -- just plug in an Intel Titan Ridge TB3 card and that's it.

The only problem right now is that no one except Intel has a good Thunderbolt controller. ASMedia is working on one, and rumor says it will ship with the next high-end AMD motherboards.

CUDA 'lock-in.' So you can jack up prices on ray-tracing GPUs, which are 1st gen and not that optimised for it. Outside of that, Nv' are sorta 'meh.' Shame AMD don't have a GPU to put pressure on that pricing. But I'm pretty sure AMD will follow Nv' up the pricing ladder. Why not? Apple do it. Nv do it. Intel do it. Why shouldn't AMD please their shareholders? :p

Well, yeah. Nvidia, Apple and Intel are all examples of what happens when you don't have competition in the chosen markets. I think all three have been guilty of jacking up prices, greed and testing what the market will tolerate.

I wouldn't buy Nv' on principle, or the current iMac. A GPU over £1k? I'd rather buy a rig with a Radeon 5700 XT and a 12-core AMD Ryzen for that.

The latter would be a big step up from my Nv 680MX and my 4-core Intel. Apple sat on 4 cores... as did Intel, for ages, until AMD piled on the CPU core-count pressure.

Azrael.
 
Well, yeah. Nvidia, Apple and Intel are all examples of what happens when you don't have competition in the chosen markets. I think all three have been guilty of jacking up prices, greed and testing what the market will tolerate.

I wouldn't buy Nv' on principle, or the current iMac. A GPU over £1k? I'd rather buy a rig with a Radeon 5700 XT and a 12-core AMD Ryzen for that.

The latter would be a big step up from my Nv 680MX and my 4-core Intel. Apple sat on 4 cores... as did Intel, for ages, until AMD piled on the CPU core-count pressure.

Azrael.

Are you saying Apple has no competition in “the chosen markets”? What markets would those be? They are a tiny player in the PC market (less than 10%). They have less than half the phone market. They are behind Spotify in music streaming. What market do they have no competition in?
 
I don't have to say anything.

Just look at the iMac, Mac Mini.

They speak for themselves. ;)

Cocooned ecosystems, whether CUDA/NV or Mac, show themselves.

Azrael.
 
Are you saying Apple has no competition in “the chosen markets”? What markets would those be? They are a tiny player in the PC market (less than 10%). They have less than half the phone market. They are behind Spotify in music streaming. What market do they have no competition in?

smart watches?
 
CUDA is not useful at all, except for earning money for NVIDIA.

I'm currently using two 2080 Tis for machine learning, but that is cuDNN's job and not related to CUDA.

CUDA is dead already, and NVIDIA just tries to make people believe they still need it.

CUDA means that my render engines go faster than a CPU-based render engine - Nvidia is earning that money.
 
CUDA means that my render engines go faster than a CPU-based render engine - Nvidia is earning that money.

No, your renderer is not using CUDA. It is using an NVIDIA-provided custom library like OptiX.

CUDA itself has already been dead for years. Even if NVIDIA's new cards stopped supporting CUDA, your renderer would still work.

Right now NVIDIA is calling their whole AI/render GPGPU solution "CUDA". It's not related to the CUDA programming language anymore.
 
No, your renderer is not using CUDA. It is using an NVIDIA-provided custom library like OptiX.

CUDA itself has already been dead for years. Even if NVIDIA's new cards stopped supporting CUDA, your renderer would still work.

Right now NVIDIA is calling their whole AI/render GPGPU solution "CUDA". It's not related to the CUDA programming language anymore.

It sounds like you’re quibbling....
 
The N1 is not inherently more powerful or less powerful.

N1 is an on-paper design. If someone knew how to build it at 3nm, it could be faster than the A13.

N1 isn't particularly "on paper". Amazon may have added a few tweaks, but the baseline of N1 is in Graviton 2 (G2), and G2 is deployed. The other N1-based designs from Ampere and others are not going into production until much later, but N1 is out there at this point.

But the current Graviton 2 implementation isn't faster than the A13. The A13 has about 1.5x the single-core performance of Graviton 2's N1.

G2 isn't optimized for single-threaded performance. There is no SMT, which is about as "single thread" a skew as you can get, but that choice is primarily there to get the core implementation size down. The design is primarily skewed toward packing as many "medium-big" cores as possible into minimal space (so it can reach high core counts, 60-80) without using too much power. That high-core-count (>50) space is what the design is optimized for. Running single-threaded jobs on a 60-core CPU is a huge mismatch (unless you are running 60 separate users with single-threaded jobs). It will make for a decent server chip in a cloud services context. It won't be as good a match for single-user workstation workloads.

The A13's 1.5x is also a "sprint burst". If you loaded up 15 users with single-threaded jobs on the A13 for 3-4 hours of workload, the lead wouldn't be that high.
 
N1 isn't particularly "on paper". Amazon may have added a few tweaks, but the baseline of N1 is in Graviton 2 (G2), and G2 is deployed. The other N1-based designs from Ampere and others are not going into production until much later, but N1 is out there at this point.



G2 isn't optimized for single-threaded performance. There is no SMT, which is about as "single thread" a skew as you can get, but that choice is primarily there to get the core implementation size down. The design is primarily skewed toward packing as many "medium-big" cores as possible into minimal space (so it can reach high core counts, 60-80) without using too much power. That high-core-count (>50) space is what the design is optimized for. Running single-threaded jobs on a 60-core CPU is a huge mismatch (unless you are running 60 separate users with single-threaded jobs). It will make for a decent server chip in a cloud services context. It won't be as good a match for single-user workstation workloads.

The A13's 1.5x is also a "sprint burst". If you loaded up 15 users with single-threaded jobs on the A13 for 3-4 hours of workload, the lead wouldn't be that high.

Interestingly, Graviton 2 running 64 threads is barely faster than a 32-core EPYC, but running 16 threads it is dramatically faster than an 8-core EPYC.

It looks like the design is either flawed or they are targeting bursty multi-VM usage (like the T-series instance types).

Anyway, the A13 is just a 6W chip, and if we put a fan on it, it will 100% keep its boost clock forever. With that 1.5x performance we do not need a 64-core monster; a 32-core solution may already be faster than Graviton 2 (due to core-to-core communication costs).
 