There is no right answer because GPUs are a hierarchy of "compute units" clustered together to share more and more resources. At the lowest level you might have a set of "compute units" that share a scheduler (i.e., the logic deciding which instructions to send to those "compute units") and a register file; at a higher level this "core" might share an L2 cache; and at a lower level the core might be split into four "sub-cores", each with a separate L1 cache. Or other permutations!

These details matter because they determine things like how fast one set of threads can share data or synchronize with another set of threads. Beyond that is the issue of "how fast for WHAT?" Graphics performance depends not just on how many FLOPs, but also on things like how rapidly textures can be read, or on specialized hardware like geometry shaders; whereas AI performance depends on things like which number formats (FP16? BF16? 4-bit integer?) are supported.
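If you want to see a bit of that hierarchy poke through the API, here is a minimal Swift/Metal sketch; the kernel is just a throwaway placeholder, and the exact numbers printed depend on the GPU:

```swift
import Metal

// Minimal sketch: query how the GPU's execution hierarchy surfaces in Metal.
// The kernel is a placeholder; any compute function would do.
let source = """
#include <metal_stdlib>
using namespace metal;
kernel void add_one(device float *data [[buffer(0)]],
                    uint tid [[thread_position_in_grid]]) {
    data[tid] += 1.0f;
}
"""

let device = MTLCreateSystemDefaultDevice()!
let library = try! device.makeLibrary(source: source, options: nil)
let pipeline = try! device.makeComputePipelineState(function: library.makeFunction(name: "add_one")!)

// threadExecutionWidth is the SIMD-group width (typically 32 on Apple GPUs):
// threads in one SIMD-group execute in lockstep on one cluster of "compute units".
print("SIMD-group width:", pipeline.threadExecutionWidth)

// maxTotalThreadsPerThreadgroup bounds how many threads can share threadgroup
// memory and barrier with each other on a single GPU "core".
print("Max threads per threadgroup:", pipeline.maxTotalThreadsPerThreadgroup)
```

The threadgroup/SIMD-group split is the same "core"/"sub-core" layering described above, just viewed from the software side.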

You can get some idea of how the various designs compare by looking at:
https://github.com/philipturner/metal-benchmarks
which (at the very top of the page, click on the arrow!) gives some numbers for Apple vs recent AMD and Nvidia designs. It's clear that there has been some variation in how large "cores" are and how they are balanced over the past 8 years or so, but that all three vendors have now converged on something very similar.
WOW! Thanks for the link. I’m still reading it and re-reading it!

One thing that really bothers me is that many of the subtopics on the GitHub repository are by programmers expressing a fervent desire to do more GPGPU processing on Mac/macOS!

GPGPU has always fascinated me.

And I wish Apple had a specific software engineering team devoted solely to combing through macOS and finding as many GPGPU-suitable “compute” instruction refinements to the macOS codebase as possible, including low-level code, all APIs, colossal “Core” APIs and Kits, Frameworks, etc. (I don’t know that Apple doesn’t, but…I doubt it…or it’s within the larger macOS team and not a designated, dedicated group with one job!)

It would be nice to see existing Apple Silicon Macs (and devices running iOS, for that matter) showing faster performance as more suitable instructions are found that can be handed off to the GPU(s) instead of the CPU.

(What would the reaction be if Apple released a milestone macOS update that was shown to run everything 10% faster! I want another “Snow Leopard”!)

If done right, all existing software — without the need to even recompile — would inherit every performance refinement.

IDK, maybe GPGPU-refined Macs/macOS could run 10% faster (depending on the task) or 20% faster (but I’d take 5%).

I agree with deprecating OpenGL — the thing is ANCIENT! And the insistence it be backwards compatible with every crusty legacy version still in its DNA is weighing it down worse than a ball and chain. (OpenGL seems like the “Flash” of graphics libraries.)

I wish other companies would take on the risk of “ripping off the Band-Aid” all at once — like Apple has done often — instead of nursing old technologies ad infinitum because they’re too risk averse — the risk that someone, somewhere can’t run their app.

I understand Apple’s conundrum: writing code that translates OpenGL, OpenCL, OpenML, CUDA (the parallel programming API, not the hardware), or Vulkan (or at least MoltenVK and MoltenCL) does nothing to wean programmers off these old or non-Mac-optimized APIs. It only gives them an excuse to use them forever and never give them up. And then the exact same app will run worse on Macs than on all other platforms.

AND, as is obvious, realtime translation/interpretation exacts a performance overhead — pretty much defeating the whole purpose of Metal and any other accelerative Mac APIs that would employ translation/interpretation.

Lastly, there’s Machine Learning, Deep Learning and A.I.

Using any of these software technologies, can macOS “learn” how a user uses a software app and optimize it over time?

Say a Photoshop user uses it a certain way and never uses certain features of the app — or Final Cut Pro or DaVinci Resolve or Blender, etc.?

Can the OS keep track of what codebases of the software it always seems to load and ones it never does? Can it “cache” or prefetch more optimally, based on what it “learns” over time about how a user uses a software app?

If this were “a thing,” you could do a standard Final Cut benchmark test, and then, 3 months later, that same benchmark would improve! And continue to improve!

Can A.I. or ML find CPU code that’s suitable for handing off to the GPU instead of human coders wracking their brains?!

I remember writing Assembly code back in the day, but somehow the same app written in FORTH ran faster than in assembled assembly code! (I never understood how or why.) But FORTH was smarter than me — that much I knew.

Software technology is amazing, and Apple, more than anyone else, has (far more than once) made its customers feel like they’ve bought brand new hardware once some milestone OS updates are installed.
 
Honestly not concerned about discrete GPUs. The issue is the score discrepancies between M1 variants. The GPU cores across all variants of the M1 run at the same speed, so performance should scale essentially linearly, with only a minor loss for each additional core. (Apple has increased the memory bandwidth of each variant to make sure those cores can be "fed" without any lag. On the Max, Anandtech tested the bandwidth and was able to push data to all GPU cores at a sustained 240GB/s. So we know it's not a memory bandwidth issue.)

Geekbench Metal scores...
M1 8-core, ~10W TDP, 68GB/s: 20440
M1 Pro 16-core, ~30W TDP, 200GB/s: 39758 (~2.0x the 8-core M1)
M1 Max 32-core, ~60W TDP, 400GB/s: 64708 (~3.2x)
M1 Ultra 64-core, ~120W TDP, 800GB/s: 94583 (~4.6x)

As expected, the first tier does in fact scale almost linearly; double the cores, double the performance. But the higher we go, the further the scores fall short of what the core counts predict. The Ultra has eight times the cores of the base M1 yet scores only about 4.6x higher, which is an enormous loss of scaling.
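A quick back-of-the-envelope check of those numbers (nothing fancy, just the scores above divided by the ideal linear scaling the core counts would predict):

```swift
import Foundation

// Scaling check for the Geekbench Metal scores quoted above, treating
// perfectly linear scaling with GPU core count as the ideal.
let base = (cores: 8.0, score: 20440.0)   // M1 8-core
let parts: [(name: String, cores: Double, score: Double)] = [
    ("M1 Pro",   16, 39758),
    ("M1 Max",   32, 64708),
    ("M1 Ultra", 64, 94583),
]

for p in parts {
    let speedup = p.score / base.score        // measured speedup vs. the base M1
    let ideal = p.cores / base.cores          // what linear scaling would predict
    let efficiency = 100 * speedup / ideal    // % of the ideal actually achieved
    print(String(format: "%@: %.1fx measured vs %.0fx ideal (%.0f%% efficiency)",
                 p.name, speedup, ideal, efficiency))
}
// Roughly: Pro ~97%, Max ~79%, Ultra ~58% of ideal scaling.
```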

As a side note: Unfortunately until we get applications that are actually optimized for Apple's GPUs (Metal), we won't see the performance they're actually capable of.

So, are you saying the SIP is fine; it’s being hindered by less-than-optimal low-level CODE? Or cores don’t scale proportionate to their number because of system level cache limitations or RAM bottlenecks? Limited/shortchanged TLB buffer? TSVs?

Edit: or thermal throttling that happens almost by default?
 
The studio has twice the memory bandwidth as the mini, 400GBs vs 200GBs.

It’s not relevant to the comparison you’re making, but it’s worth noting, the plain, non-Pro M2 Mac mini has a memory bandwidth of just 100GB/s.

I doubt this was a cost-saving limitation — I’d be more inclined to believe Apple deliberately handicapped the base model for Marketing purposes.
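For what it’s worth, peak DRAM bandwidth falls straight out of the memory configuration: bus width times transfer rate. A rough sketch, assuming the commonly reported LPDDR5-6400 and 128/256/512-bit buses for the M2, M2 Pro and M2 Max (those figures are my assumption, not something stated in this thread):

```swift
// Peak DRAM bandwidth = (bus width in bytes) x (transfer rate).
// The bus widths and LPDDR5-6400 rate below are assumptions based on
// commonly published M2-family specs, not figures from this thread.
func peakBandwidthGBps(busWidthBits: Double, megaTransfersPerSecond: Double) -> Double {
    // bytes per transfer * MT/s = MB/s; divide by 1000 for GB/s
    (busWidthBits / 8.0) * megaTransfersPerSecond / 1000.0
}

print(peakBandwidthGBps(busWidthBits: 128, megaTransfersPerSecond: 6400)) // ~102 GB/s (M2)
print(peakBandwidthGBps(busWidthBits: 256, megaTransfersPerSecond: 6400)) // ~205 GB/s (M2 Pro)
print(peakBandwidthGBps(busWidthBits: 512, megaTransfersPerSecond: 6400)) // ~410 GB/s (M2 Max)
```

Whether that configuration choice was cost-driven or segmentation-driven is a separate question; the ~100GB/s number itself is simply what a 128-bit LPDDR5 package delivers.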
 
Apple fans were saying these exact things about OLED on the phones until Apple started using them. According to rumors iPads are next. Eventually Macs will follow. On average, Apple is some 10 years behind Samsung in OLED adoption.
Correct, and to give a little more perspective: today I watched a YouTube clip about Sony's QD-OLED TV from last year, which was named TV of the Year by multiple reviewers. As the guy was testing his new TV he said this: even if some Mini LED TVs are brighter, they definitely don't match the precise pixel-level control of OLED TVs, their great uniformity, and most importantly the perfect contrast they provide, so the HDR scene he was watching was more impactful on his new OLED TV than on any other Mini LED TV. It's no coincidence that most Windows laptops at CES 2023 have OLED options; it's the next step for better screen quality.
 
All of you are praising OLED on Samsung, but I find the image on an iPhone much better looking than on any Android phone.

Open the same app on iPhone and on the best Android phone, and it looks better to the eye on iPhone.
The phone and TV OLED panels are quite different, according to my understanding. Also, I've never heard of a phone screen using QD-OLED.
 
I think these results are really great for year-over-year improvements. So many people seem to think that we are going to see Intel-to-M1-level performance increases every year, which is not realistic. Look at iPhones over the years: steady but significant improvement each year.

I guess all the people who claimed that M2 Pro/Max would be on 3nm were wrong? It didn't make sense to me, but man, you would think it was a given the way so many people talked.

Also, I don't think we are going to see radical changes even on 3nm. It will certainly bring even better efficiency and performance, but it is not like you are going from 14nm Intel to 5nm ARM anymore.
 
Correct, and to give a little more perspective: today I watched a YouTube clip about Sony's QD-OLED TV from last year, which was named TV of the Year by multiple reviewers. As the guy was testing his new TV he said this: even if some Mini LED TVs are brighter, they definitely don't match the precise pixel-level control of OLED TVs, their great uniformity, and most importantly the perfect contrast they provide, so the HDR scene he was watching was more impactful on his new OLED TV than on any other Mini LED TV. It's no coincidence that most Windows laptops at CES 2023 have OLED options; it's the next step for better screen quality.

You’re confusing a bunch of different technologies. The Quantum Dot OLED screens used in the Sony A95K and Samsung S95 last year (as well as the Alienware monitor) have nothing in common with any of their phone or laptop screens. Their phone and laptop screens have more in common with the WRGB OLEDs that LG produces. They may produce QD-OLED screens for notebooks one day, but that isn’t the case now.
 
Okay, so performance and longevity wise, which of these two same-priced systems am I better off with:
  • Mac Studio - Apple M1 Max with 10-core CPU, 24-core GPU, 16-core Neural Engine, 32GB, 1TB SSD; or
  • Mac Mini - Apple M2 Pro with 12‑core CPU, 19-core GPU, 16‑core Neural Engine, 32GB, 1TB SSD
Anyone who thinks 32 GB RAM will be plenty for the 2023, 2024, 2025, 2026, 2027 life of a new box should go with the M2 Mini's newer tech. However, most users buying at that performance level in 2023 should be getting 64 GB RAM, not 32 GB; and 64 GB is currently only available on MBPs or on the year-old-tech M1 Studio for an additional $400.

Choice #3 is to wait for an M2 Studio to be released.
 
Anyone who thinks 32 GB RAM will be plenty for the 2023, 2024, 2025, 2026, 2027 life of a new box should go with the M2 Mini's newer tech. However most users buying at that performance level in 2023 should be getting 64 GB RAM, not 32 GB; and 64 GB is only available on the Studio.

Choice #3 is to wait for an M2 Studio to be released.
I would not be that certain for all users.
Many people can get away with 8 gb currently. Not that I would advise getting that. But it shows how efficient memory usage is on these machines.
It depends on what you do.
 

Apple ARE doing this (finding GPGPU opportunities across macOS).
But many of the people complaining about GPGPU are acting in terrible bad faith. They are willing to use MKL in their code, but not willing to use Accelerate. They are willing to rewrite their code for CUDA, but not for Metal.
You've doubtless seen the type of person in these forums who will find something to complain about no matter what Apple does – now why do you expect that personality type to be limited to non-programmers?
Just like there's no value to Apple in dealing with the person who insists every year that they would buy a mac or iPhone if only Apple did one particular thing, but who has never bought an Apple product in their life and keeps changing the supposed deal breaker, so there's no point in Apple trying to accommodate people who simply will not use Apple no matter what.

Where there is genuine work to be done by Apple (as in a specific set of realistic technical complaints) Apple has done the work and continues to do so. TensorFlow and PyTorch have been hooked up to Accelerate. Some users could not use the BLAS/LAPACK within Accelerate because it did not incorporate some of the newest APIs, so that was fixed. Apple gave the Linux crowd what they thought would be necessary to implement a foreign OS on macs, and then gave them more when specific issues were discovered.
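For anyone wondering what "just use Accelerate instead of MKL" looks like in practice, here is a minimal sketch of a BLAS call through Accelerate's classic cblas interface; the matrix sizes and data are made up for illustration:

```swift
import Accelerate

// Minimal sketch: a single-precision matrix multiply through Accelerate's
// BLAS (the same kind of call people route to MKL on other platforms).
// Sizes and data here are made up for illustration.
let m: Int32 = 2, n: Int32 = 2, k: Int32 = 2
let a: [Float] = [1, 2,
                  3, 4]          // m x k, row-major
let b: [Float] = [5, 6,
                  7, 8]          // k x n, row-major
var c = [Float](repeating: 0, count: Int(m * n))

cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
            m, n, k,
            1.0, a, k,           // alpha, A, lda
            b, n,                // B, ldb
            0.0, &c, n)          // beta, C, ldc
print(c)                         // [19.0, 22.0, 43.0, 50.0]
```

As I understand it, the newer ILP64 "new LAPACK" headers mentioned above sit alongside this; the classic interface shown here still works, it just flags a deprecation warning on recent SDKs.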
Apple has had 40 years of experience with this issue of how much to allow for obsolete code vs "compelling" newer code. They are well aware (as are those of us who have been along for the entire journey) of how much is worth doing and no more. For example at each of the 68K to PPC, then PPC to x86, then x86 to ARM transitions, Apple has done LESS to allow old code to execute. This isn't because they forgot! It's because they saw the extent to which making things easier the last time was used as an excuse...

As for the larger project of more widespread throughput computing (via AMX, GPU, or NPU) people are CONSTANTLY thinking about this, inside Apple and outside. Pretty much no-one outside Apple seems to appreciate quite what AMX is, or the extent to which it is evolving beyond matrix multiply to a sort of "AVX-512 done right". But Apple keeps evolving it every year both in the HW (it's notably faster on M2 than M1) and in SW (eg things like computing multiple simultaneous special functions in a 512b-wide vector).
But things take time!
Especially when there is widespread disagreement about what's required. You want to compute on the GPU? OK, so does that mean you REQUIRE IEEE floats with everything that implies from NaNs to denorms to rounding modes? And you're OK if that means you now only get half the throughput you could get if we dropped those? Well, opinions differ! Some people legitimately need that IEEE machinery, some people are well aware that they don't, some are willing to make the effort to work around the lack while others are not.
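To make that tradeoff concrete: in Metal it is literally a compile switch. A rough sketch using the older fastMathEnabled flag (newer SDKs expose a more granular mathMode setting instead):

```swift
import Metal

// The IEEE-vs-throughput tradeoff shows up as a compile option in Metal.
// Sketch only: uses the older fastMathEnabled flag.
let device = MTLCreateSystemDefaultDevice()!

let strictOptions = MTLCompileOptions()
strictOptions.fastMathEnabled = false   // keep NaN/infinity semantics, slower math

let fastOptions = MTLCompileOptions()
fastOptions.fastMathEnabled = true      // allow reassociation, skip NaN handling, faster

let source = """
#include <metal_stdlib>
using namespace metal;
kernel void logistic(device float *x [[buffer(0)]],
                     uint tid [[thread_position_in_grid]]) {
    x[tid] = 1.0f / (1.0f + exp(-x[tid]));
}
"""

// The same source compiles either way; only the numerical guarantees differ.
let strictLib = try? device.makeLibrary(source: source, options: strictOptions)
let fastLib   = try? device.makeLibrary(source: source, options: fastOptions)
print(strictLib != nil, fastLib != nil)
```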
 
It’s not relevant to the comparison you’re making, but it’s worth noting, the plain, non-Pro M2 Mac mini has a memory bandwidth of just 100GB/s.

I doubt this was a cost-saving limitation — I’d be more inclined to believe Apple deliberately handicapped the base model for Marketing purposes.
Clearly you have no CLUE how the memory controllers are implemented on M1/M2 vs Pro vs Max.
Or the extent to which 100GB/s is an extraordinarily high memory bandwidth for this class of SoC (which is basically the i3 of Apple's line).

You can read my PDFs (volume 3) to find out the engineering details.
https://github.com/name99-org/AArch64-Explore

Honestly this endless claiming (by people who know nothing of SoC design) that "Apple did it because they suck" rather than understanding the engineering is so damn tiresome. Be Better!
 

I’M DULY SCOLDED! I’LL BE GOOD!
 
Apple ARE doing this (finding GPGPU opportunities across macOS).

Apparently you’re privy to information I’m not. All apologies.

But many of the people complaining about GPGPU are acting in terrible bad faith. They are willing to use MKL in their code, but not willing to use Accelerate. They are willing to rewrite their code for CUDA, but not for Metal.

That’s precisely why I wrote, “I understand Apple’s conundrum.”

You've doubtless seen the type of person in these forums who will find something to complain about no matter what Apple does – now why do you expect that personality type to be limited to non-programmers?
Just like there's no value to Apple in dealing with the person who insists every year that they would buy a mac or iPhone if only Apple did one particular thing, but who has never bought an Apple product in their life and keeps changing the supposed deal breaker, so there's no point in Apple trying to accommodate people who simply will not use Apple no matter what.

But Apple customers b**ch**g about Apple is a time honored tradition! j/k

(If you want to truly see an alleged Apple fan ceaselessly complain about Apple and give them almost no credit for anything ever, all you have to do is watch pretty much any Luke Miani YT video.)

Seriously though, I don’t list my hardware/software in my “signature” to save people screen real estate, but if you’re suggesting I’m not an Apple enthusiast and have never owned an Apple product, I owe you an apology for creating that false perception.

For better or worse, I’ve never owned a non-Apple computer or phone or “MP3 player” or smartwatch or iPad-ripoff in my life! I still have two working Newtons!

My first computer when I was a tot was an Apple ][e and then it was Macintosh all the way after that. I lived through the difficult transition to System 7 and the transitions from 680X0 to PowerPC to Intel to Apple Silicon.

I don’t know a thing about Windows and love to tell my friends who know I’m a Geek and who ask me to “fix” or help them with their Windows PC, “Sorry. I don’t do Windows.”

I own an iPhone 14 Pro Max; I’m wearing a silver stainless steel Apple Watch 8; I’m writing this on a 2022 iPad Pro with 2TB. At the end of the month, after I pay my Apple Card down to $0, I’ll be ordering a 16" MacBook Pro M2 Max with 8TB of storage and 96GB of RAM which will cost me $7,433.29 USD including tax (don’t try to talk me out of it — I have to have it!). I look forward to editing video on it and using Maya and some other pretty resource-intensive software apps. (I’m also starting to dip my toes in the surreal world of A.I.)

So I’m no PC troll. (I’m pretty sure the term “troll” actually originated in Apple boards and chat rooms because there was always some insecure PC user in there shouting, “APPLE SUUUUUUUUUUX!” or “Enjoy your one button mouse!”) PC trolls. 😤

I also haven’t posted in YEARS, because I found that whenever I got something wrong, there were always plenty of snotty, self-righteous, know-it-alls who would put me down, insult me and call me stupid (which I don’t need! I can get that at home any day!).

Where there is genuine work to be done by Apple (as in a specific set of realistic technical complaints) Apple has done the work and continues to do so. TensorFlow and PyTorch have been hooked up to Accelerate. Some users could not use the BLAS/LAPACK within Accelerate because it did not incorporate some of the newest APIs, so that was fixed. Apple gave the Linux crowd what they thought would be necessary to implement a foreign OS on macs, and then gave them more when specific issues were discovered.
Apple has had 40 years of experience with this issue of how much to allow for obsolete code vs "compelling" newer code.

I don’t think “Apple” has 40 years of experience at anything. Unless there is a non-exec career employee nearing retirement, I don’t think there’s a single Apple employee left with 40 consecutive years of experience who’s gone through all the hardware/software transitions and changes. Steve Wozniak, maybe, but I don’t think he was ever a Mac fan and AFAIK he loved to stick with things like finding ways to optimize 6502 machine language code to save a processor cycle or two.

They are well aware (as are those of us who have been along for the entire journey) of how much is worth doing and no more. For example at each of the 68K to PPC, then PPC to x86, then x86 to ARM transitions, Apple has done LESS to allow old code to execute. This isn't because they forgot! It's because they saw the extent to which making things easier the last time was used as an excuse...

I’m pretty sure I made this point myself about the tightrope walk of weaning reluctant programmers off antiquated, deprecated APIs without making it easy for them to never convert and to continue to use ancient APIs forever and ever.

Apple is always the tip of the spear for shouldering the burden of risk in order to advance the state of the industry — like banning the popular (at the time) Flash from iPhones from day 1. (Or eschewing all legacy ports in favor of USB on the 1998 iMac. Every industry “expert” predicted DOOM!)

As for the larger project of more widespread throughput computing (via AMX, GPU, or NPU) people are CONSTANTLY thinking about this, inside Apple and outside. Pretty much no-one outside Apple seems to appreciate quite what AMX is, or the extent to which it is evolving beyond matrix multiply to a sort of "AVX-512 done right". But Apple keeps evolving it every year both in the HW (it's notably faster on M2 than M1) and in SW (eg things like computing multiple simultaneous special functions in a 512b-wide vector).
But things take time!
Especially when there is widespread disagreement about what's required. You want to compute on the GPU? OK, so does that mean you REQUIRE IEEE floats with everything that implies from NaNs to denorms to rounding modes? And you're OK if that means you now only get half the throughput you could get if we dropped those? Well, opinions differ! Some people legitimately need that IEEE machinery, some people are well aware that they don't, some are willing to make the effort to work around the lack while others are not.
“But things take time!”

True. For example — if I understand it correctly — after all this time, there are still too many Mac apps ostensibly “optimized for Apple Silicon” that aren’t all that “optimized for Apple Silicon.”

“You want to compute on the GPU?”

No. I want suitable ”compute” instructions processed on the GPU only if it results in appreciably faster execution for a superior UX. Otherwise, there’s no point.

I don’t want instructions to arbitrarily run on the GPU just for the sake of it.

I know I’m stupid — but I’m not that stupid.
 
Apple fans were saying these exact things about OLED on the phones until Apple started using them. According to rumors iPads are next. Eventually Macs will follow. On average, Apple is some 10 years behind Samsung in OLED adoption.
The reason why Apple implements screen tech like OLEDs so much later than someone like Samsung is simple mathematics and production capabilities, where Samsung produces a few million at most, and Apple produces hundreds of millions per year of a product.

This is the thing…. until they somewhat ’feature match’ GPUs from AMD/Nvidia, it is going to be dependent on the particular application. It will probably be faster at some things (like I’ve seen some pretty big scenes being rotated and perform quite well), while at other things, like those requiring ray-tracing, it might be quite poor. (Games, are a whole other set of issues.)

Yes, I’m encouraged by that!
This will sound really dumb…but I’d love someone to explain to me what benefit ray-tracing would have for us in practice?
 
Okay, so performance and longevity wise, which of these two same-priced systems am I better off with:
  • Mac Studio - Apple M1 Max with 10-core CPU, 24-core GPU, 16-core Neural Engine, 32GB, 1TB SSD; or
  • Mac Mini - Apple M2 Pro with 12‑core CPU, 19-core GPU, 16‑core Neural Engine, 32GB, 1TB SSD
I'd go Studio Max.

I think the play for a "performance desktop" on a budget is as follows:

1) M2 (regular) Mini with RAM upgraded to 24GB for $1000. I have an M2 Air configured this way and it's awesome. I can rarely tell any difference from my M1 Ultra Studio. The max RAM of 24GB really makes a difference for my workflows.

2) M1 Max Studio, base model, zero upgrades, for $2000.

When you upgrade the M2 Mini to the Pro, it gets too close in price to the M1 Max Studio, so not really worth it, IMO.
 
Absolutely incorrect, on multiple levels. You need to go do some research on the significant gains made in the PC/OLED space, including QD-OLED. As for IPS screens being on-par with OLED? Plainly -- plainly -- you've never done a side-by-side comparison. OLED colors, HDR performance, infinite contrast, true pure blacks, and the insanely fast pixel response times (fastest pixel display tech on the market by far), just blow IPS out of the water.

Once you've used an OLED monitor with your computer -- you won't go back to IPS or VA.
I’ll agree with you on the quality and richness of color and depth in OLED screens versus IPS, but although QD-OLED is much brighter (I think on par with LED? or nearly so?), I would still be concerned about burn-in on a monitor that I planned to keep for more than 3 years. With my iPhones, I’ve never seen burn-in on the OLEDs, but I keep these phones for 2 years at most before I sell them and upgrade. Going back to the iPhone X and now on the 14 Pro Max, I’ve never seen burn-in, even when I was ready to upgrade at the 2.3-year point. But how long the screens would have lasted after that, I don’t know. All I do know is that on various forums people with OLED TVs routinely still complain about burn-in within 2-3 years. Maybe those people are unusual? I don’t know, but given the cost of a quality monitor (or a large high-quality TV), I’d prefer to go the safer route of mini-LED, where you have similar richness and contrast but no danger of burn-in. Even if OLED still does look better—while it lasts!
 
The reason why Apple implements screen tech like OLEDs so much later than someone like Samsung is simple mathematics and production capabilities, where Samsung produces a few million at most, and Apple produces hundreds of millions per year of a product.


This will sound really dumb…but I’d love someone to explain to me what benefit ray-tracing would have for us in practice?

Right now the standard way to create "computer graphics" imagery (video games, but also things like Apple AR) makes use of the stuff you presumably know about - triangles, texture mapping, things like that.
Within that environment things like ray tracing can help add realism, for example by figuring out where shadows should be. Even if you don't care about games, this is important for AR in making artificial models "projected into" reality look more natural.
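To make "figuring out where shadows should be" concrete, here is a toy sketch of the shadow query a ray tracer answers (is anything between this surface point and the light?), with a single sphere standing in for the scene; no Metal ray-tracing API, just the geometry:

```swift
import simd

// Toy shadow query: trace a ray from a surface point toward the light and
// ask whether it hits an occluder. A single sphere stands in for the scene.
struct Sphere { var center: SIMD3<Float>; var radius: Float }

// True if the ray from `origin` along unit direction `dir` hits the sphere.
func inShadow(of s: Sphere, from origin: SIMD3<Float>, toward dir: SIMD3<Float>) -> Bool {
    let oc = origin - s.center
    let b = simd_dot(oc, dir)                     // half-b of the quadratic
    let c = simd_dot(oc, oc) - s.radius * s.radius
    let disc = b * b - c
    guard disc >= 0 else { return false }         // ray misses the sphere entirely
    return -b + disc.squareRoot() > 0             // at least one hit in front of the origin
}

let occluder = Sphere(center: SIMD3<Float>(0, 1, 0), radius: 0.5)
let surfacePoint = SIMD3<Float>(0, 0, 0)
let lightPos = SIMD3<Float>(0, 3, 0)
let toLight = simd_normalize(lightPos - surfacePoint)

print(inShadow(of: occluder, from: surfacePoint, toward: toLight) ? "in shadow" : "lit")
```

A real renderer fires millions of such rays against full scene geometry per frame, which is exactly what hardware ray tracing units accelerate.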

Beyond this, one can use ray tracing to make very high quality images as you know (eg movies), which is probably not of interest to most people. But the next stage beyond ray tracing graphics is something called NeRFs (Neural Radiance Fields), which use AI to create imagery in a variety of ways, for example creating a model of a house you can walk through and explore based on still images of each room, or creating a model of some environment [indoor or outdoor you can explore] based on the scenes in a movie.
Here's a (very rough) video of the issue.

My understanding (knowing VERY little about this!) is that to make this work, you want both the AI part (ie NPU and all that) AND hardware ray tracing which is "informed" by the AI part as to what to "draw".
We also know that Apple is very interested in NeRF. This is probably for their own use and also because they believe that many of their customers (both working professionally in video, and amateurs playing with graphics) will want quality NeRF support once the technology takes off. (Same way Apple has been trying to ensure that Diffusion AI art is well supported on Apple products as this tech starts to become well known. Maybe in two years we'll be able to say "Hey Siri, draw me a picture of a cow dressed like an LA Lakers fan"... )
 
I would not be that certain for all users.
Many people can get away with 8 gb currently. Not that I would advise getting that. But it shows how efficient memory usage is on these machines.
It depends on what you do.
I was burned the last time I bought a computer with only 8GB RAM, my 2013 MBP Retina 15”. Even 3-4 years ago, it would run out of memory with just one photo open in Photoshop and InDesign running in the background! I would have to close ID, clear my RAM, and then work on photos. Not even particularly large photos.
This past year, with InDesign, and this is still the old 6.0 standalone version on the laptop, I have to clear RAM immediately before I can do anything.
I know that’s an extreme use case! But, I can’t help but think that if I had doubled the RAM to 16 (which now seems a paltry amount!), this laptop would still be much more functional.
But it has really reached its EOL. The screen has finally begun to slightly delaminate at the juncture where I open the case. But Apple really does build things to last, because otherwise the computer is in great shape—it even gets around 4.5 hours on battery—which is excellent given its age and that even when new, battery life was probably only around 6-7 hours with the Intel i7 quad-core processor!
 
The reason why Apple implements screen tech like OLEDs so much later than someone like Samsung is simple mathematics and production capabilities, where Samsung produces a few million at most, and Apple produces hundreds of millions per year of a product.
The latest statistics on the smartphone market share worldwide show that as of October 2022, US phone maker Apple leads the pack, with a market share of 28.43%. This means that nearly three in 10 smartphone users worldwide use an Apple phone.

Second on the list of the most popular smartphone manufacturers is Samsung. The South Korean brand has 28.19% of the smartphone market share, just a marginal 0.24% less than Apple.

Apple and Samsung dominate the smartphone industry and own a combined 56.62% of the total smartphone market share. In fact, these two smartphone manufacturers have been leading the market since 2013.

Source
 
This means that nearly three in 10 smartphone users worldwide use an Apple phone.

Careful - “market share” is usually based on revenue rather than number of units sold. Apple and Samsung could get their high ranking by selling relatively few, expensive phones - a lot of the also-rans on that chart could be selling truckloads of cheap and cheerful Android phones. Probably doesn’t change the argument, but you can’t just convert from market share to “3 in 10 people” like that. OTOH it also depends on how long people keep their phones.

Also, the subject was supply and demand on displays - most of the also-rans will be using the same displays (and other components - some smaller names are probably selling the same phones re-badged) from a small handful of display makers (including, AFAIK, Samsung). Those generic displays are going to be the ones turned out in the largest quantities.
 