After factoring in the exchange rate, Australian Government taxes, and the ACCC requirement that Apple provide seven days' care plus a two-year warranty, the MacBook Pro M3 Pro 💻 has a starting price of:

[Attachment: MBA 💻 Mc Max.png]

Sucks to live in a highly taxed and regulated country. Enjoy your liberties, my American cousins.
 
For server CPUs, sure. AWS, Azure, GCP aren't going to buy Threadripper, nor does AMD's marketing aim it at them. They talk about "creative professionals" (sound familiar?), and yes, those will have burst tasks that scale to many cores. But they'll also be idling a ton. Most of your time in a workday isn't actually spent rendering something. It's spent interacting with the UI, looking for the right file, or even more mundanely reading/answering e-mail or participating in some WebEx conference. Not to mention all kinds of background stuff. Single-core performance and heterogeneous cores will help you more in those moments (which will likely make up the majority of your workday) than a high core count.

(You can mitigate this somewhat by getting a Threadripper shared by multiple members of the team, but now you're introducing complexity — much higher latency due to networking, most notably. The team won't love that approach.)
When you buy a workstation chip, you won't be worrying about how it will perform in WebEx. What you say is valid in isolation; it just doesn't really make sense in this context. It's still how fast the rendering will complete that matters (btw, I hate that in Apple forums we always end up with rendering...). Or when your workstation processes telescope data in the morning, your concern will be how fast it isolates the interesting data points, not how snappy the e-mail client will be when you write to team members about it. Or whether the simulation you started on Friday afternoon will be ready by Monday, or you'll miss the publication deadline and spend the next weeks worrying that someone else will overtake you.
 
When you buy a workstation chip, you won't be worrying about how it will perform in WebEx.

Primarily? No. But when that's your day-to-day machine, you'll care how it behaves.

What you say is valid in isolation; it just doesn't really make sense in this context.

I'd throw that right back: yes, of course high multi-core results are great in isolation. But in reality, you'll either

a. be using that machine for a lot more mundane stuff as well, at which point single-thread matters more, and e-cores help keep it cool, quiet, and environmentally friendly / cheaper for your energy bill, or
b. dedicate the machine to an entire team and have people take turns using it, which is fine for the "have it render over the weekend" scenario you bring up, but otherwise a lot more annoying.

It's still how fast the rendering will complete that matters (btw, I hate that in Apple forums we always end up with rendering...).

Well, what else would you end up with? Many high-performance tasks are either on the GPU instead these days (including, well, rendering), or they aren't heavily parallelizable (software development, for instance: too reliant on I/O, and the dependency tree makes it hard to sync, so you're probably not gonna be scaling your build system to 96 cores, especially when you're in a JIT toolchain like Java or .NET).
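
To put a rough number on the "96 cores" point, here's a quick Amdahl's-law sketch in Python; the 20% serial fraction is a made-up assumption, not a measurement of any real build:

```python
# Amdahl's law with a hypothetical serial fraction: if part of a build
# (linking, dependency ordering, I/O) can't be parallelized, extra cores
# stop paying off quickly.

def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    """Upper bound on speedup when `serial_fraction` of the work is serial."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for cores in (8, 16, 32, 96):
    # Assume, purely for illustration, that 20% of the build is serial.
    print(f"{cores:3d} cores -> {amdahl_speedup(0.20, cores):.1f}x speedup")
# Roughly: 8 -> 3.3x, 16 -> 4.0x, 32 -> 4.4x, 96 -> 4.8x
```

With those toy numbers, going from 32 to 96 cores buys you almost nothing; that's the sense in which a smaller number of faster cores can matter more than the headline core count.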

Or when your workstation processes telescope data in the morning, your concern will be how fast it isolates the interesting data points, not how snappy the e-mail client will be when you write to team members about it. Or whether the simulation you started on Friday afternoon will be ready by Monday, or you'll miss the publication deadline and spend the next weeks worrying that someone else will overtake you.

Without question, yes.
 
Incredible jump over the previous Max chips!

STOP! No forum cred for you today hurling a positive comment like that around here.

Next time try and use crowd-pleasing words like underwhelmed/lame/pathetic/pitiful/bored/snoozer/hot garbage/etc. somewhere in your comment. Bonus points are awarded when that's accompanied by an oversized high school-ish eye-roll.
 
22 hours almost idling (web browsing, movie watching); totally worth nearly $5k for this, really nothing fancy.
Get back to me once you get 22h with decent graphics-related work or gaming; then Apple can show off with 22h.

Anyway, regarding professional graphics, there is no way around Nvidia RTX (formerly Quadro) cards; Apple graphics is a toy compared to Nvidia.

But yeah, a speedy Apple laptop might be useful for compiling Apple software, that's it, and only until the heat and throttling kick in.
You sound like a bitter Windows user, knock it off mate. It's a freaking laptop that, unplugged, has more horses than whatever rubbish you're spouting. We are not talking about a desktop! Take your professional graphics elsewhere; you're obviously comparing apples to oranges.

Heat and throttling? Yes, that's your average non-Apple-silicon laptop talking. Look elsewhere with your 1km extension lead…
 
Except the MacBook Pro is portable and has a built-in screen, keyboard, trackpad, audio, etc.
Generally agree, but those desktop peripherals and inputs/outputs don't (necessarily) get replaced; they just plug into your new desktop, so it's 99% a non-issue.
 
Depends on your use case and technical skills, but in the end it's software availability that really hurts Apple too. They can solve that (they're worth a freaking 3 trillion dollars as a company), but for whatever reason they won't.
Valve's Steam Deck doesn't run Windows, but it can play all Windows games. All this power with these M3 chips, and you can only play Apple Arcade games.
 
Valve's Steam Deck doesn't run Windows, but it can play all Windows games. All this power with these M3 chips, and you can only play Apple Arcade games.

Precisely so. And it's such a shame, because I've had the opportunity to test emulators running on Apple's iPhone and iPad.

They're such little beasts: the iPad can run a Wii emulator and barely get warm.

But Apple clearly wants to dictate what you run and how; that list doesn't include emulators, so they keep up a cat-and-mouse game of blacklisting them.

So, what's the point of having so much power, but not being able to use it at all?

And of course, I mentioned emulators, but there are many other legitimate use cases that would benefit from the potential of those chips and that Apple just won't accept; it actively blocks them.

For example, try to compile anything on an iPad. It's a nightmare. You'll only be able to do it in a sandboxed environment. MAYBE.
 
Good times. Just think: a year from now we’ll be talking about how the 16” MacBook Pro M4 Max is faster than the Mac Studio with M3 Ultra 😆
It will not be as dramatic until they switch to 1nm, which is not happening for at least a few years. M4 and M5 will be incremental, much like M1 to M2.
 
I have a fully specced-out M2 Ultra Studio on order. Do you think it's wise to cancel and wait? I just don't know how long it will take for an M3 Ultra Studio to arrive...

For context, it's replacing a 2019 Mac Pro and is used for casual 3D work in Blender, Motion etc.
If you still have the option to cancel, of course you should. Are you kidding?…

Apple is guaranteed to release the M3 Ultra next year; worst-case estimate, by Q3…
 
It makes perfect sense given that the M1 and M2 Max chips had 12 cores (8 performance + 4 efficiency), while the M3 Max has a big jump to 16 cores (12 performance + 4 efficiency). 50% more performance cores, in addition to all the cores being faster, would logically lead to a result like this.
It's also worth noting that Geekbench 6's multicore calculation is designed to be more of an approximation of real-world workflows, not all of which scale neatly and evenly to additional cores. Geekbench 5 would scale up proportionately to the number of cores (i.e., an M2 Ultra would have a multicore score double that of the M2 Max, because the Ultra has double the cores of the Max), but Geekbench 6 won't, because it takes into account that not all of the cores will be utilized at the same time.

So yes, the M3 Max will perform comparably to an M2 Ultra for many users in multicore performance because it has a smaller number of faster cores vs the Ultra, whereas the Ultra will likely perform better in highly specialized, parallelized workflows.
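
To make that concrete, here's a toy back-of-envelope sketch in Python; it's purely illustrative, and the per-core speedup, E-core weighting, and sublinear scaling exponent are made-up assumptions, not Geekbench's actual methodology:

```python
# Toy model only: NOT Geekbench's real scoring. All numbers below are
# assumptions for illustration (M3 P-cores ~15% faster, an E-core worth
# ~0.4 of a P-core, and multicore throughput scaling as cores**0.8).

def effective_cores(p_cores, e_cores, per_core_speedup=1.0, e_core_weight=0.4):
    """Collapse a P+E core mix into a single rough 'core budget'."""
    return per_core_speedup * (p_cores + e_core_weight * e_cores)

def multicore_estimate(core_budget, scaling_exponent):
    """Exponent 1.0 ~ GB5-style linear scaling; <1.0 ~ GB6-style sublinear scaling."""
    return core_budget ** scaling_exponent

chips = {
    "M2 Max (8P+4E)":    effective_cores(8, 4),
    "M3 Max (12P+4E)":   effective_cores(12, 4, per_core_speedup=1.15),
    "M2 Ultra (16P+8E)": effective_cores(16, 8),
}

for name, budget in chips.items():
    print(f"{name:20s} linear: {multicore_estimate(budget, 1.0):5.1f}  "
          f"sublinear: {multicore_estimate(budget, 0.8):5.1f}")
```

Under the linear model the M3 Max clearly trails the M2 Ultra; under the sublinear model the gap narrows, which is the same effect described above: fewer-but-faster cores look relatively better once you stop assuming every extra core is fully used.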
 
Interesting how new pro-level processors with a more advanced manufacturing process, released LATER, are faster.

I learn a lot around here.

People were skeptical about A17 Pro performance because the jump to 3nm only gained +15% performance.

But people also forget that A17 Pro was a jump from 4nm to 3nm.

The M3 family, however, jumps from 5nm to 3nm and gains 20%.
And since Apple keeps comparing the M3 family to older processors, people didn't expect any bigger improvements.
 
Scary fast is right. As a consumer, I wish I'd waited all of a month to get the M3 Max MBP 16". But as a shareholder... 🐳
 
Thinking about the M3 Pro: it's more like a scaled-up M3 than the Pro was in prior generations. Lower performance scores, but good (calculated and scalable) results around 17k in Geekbench (11,500 × 1.5) -> this may be the door opener for a 15-inch MacBook Air with an M3 Pro, because of lower thermal constraints.
This is a great take.

I feel that, if there’s been a problem with the M-series branding, it’s that the Pro chips haven’t catered to a specific audience. Max is perfect for professionals, Ultra is for a minority and the base chip is for the majority.

But now that the base M3 is so good, it could realistically cater to previous Pro chip customers. Now it sits with those who just want a little more and want the other specs that go along with it (i.e., multiple displays, I/O).
 