They don't cost $3k-4k....
The machines that don’t exist? Yeah, I agree those don’t cost that much. Seen the Razer Blade 16? With an RTX 4070 Ti it’s exactly $4,000 with a warranty. Seen the G14 with a 4090? Yep, $3,200. Same story with the Lenovos and Alienwares. I rest my case: they are over $3,000 for a nice machine. Sure, you can get one cheaper with lesser specs, but again, nobody is pairing a 4080 with a 15-watt CPU.
 
  • Like
Reactions: jeffpeng
Uh, that’s more like a 50 W CPU now, but anyway…
 
Given Apple’s past actions, they’d jack up the base price by $200, so that M3 16/512 would cost $1799 instead of $1599 and they’d simply drop the 8GB version. They did that for the iPhone 15 Pro Max and they probably did it for the base M3 MacBook Pro anyway when they upped the base version to 512GB from the M2 13” MBP’s 256GB base. If they had kept 256GB as the base for the 14” M3 MBP, they likely would have charged $1399. They tossed in the chassis and all the goodies you get from a higher MBP (MagSafe, ports, screen, speakers) just to assuage criticism for the price hike, and yes it was a price hike since that model replaces the 13” MBP. They called it a lower base price from the M2 Pro’s $1999 price, but that’s what marketing does.

That they will just bake a price hike into the base price is why I oppose changing the base configuration to 16GB because Apple would make it more expensive for the people who don’t need 16GB. Apple is not going to just toss it in for free. If Apple were to miraculously make the next model 16GB base without raising the price, I’d be all for it. Nobody opposes the concept of more memory, but we don’t want anyone to pay for what they don’t need. But until their market research tells them that people don’t want 8GB anymore as their base, they’re going to continue offering the same amount. Loud voices on tech chat forums aren’t going to change their mind. Only supply and demand will dictate that. The market speaks. Most people are just fine with 8GB.
To be honest, even that would be a more ideal solution than leaving 8GB on the base model IMO. A lot of retailers don't stock the RAM-upgraded models (some do, some don't), and people in other countries especially apparently have a lot more difficulty with this if they aren't near an Apple store and don't want to wait for a build-to-order.

Furthermore, the models that go on deep sales at retailers are often the base models, and if the base model is a higher specced machine, there would be opportunities for people to get it at a lower price when these sales happen.

Still not saying it's ideal (in my opinion, 16GB should come standard once we're talking about $1600 price brackets).
 
I think it's time for 8GB computers to just die across the board. The Air should start at 12GB min and the Pro at 16GB or 24GB since they are embracing the odd sizes now.
 
  • Like
Reactions: eltoslightfoot
He would be saying different things if he weren’t on Apple’s payroll.
Even a casual user will feel the impact of having just 8 GB after opening a few more browser windows and tabs, plus maybe Slack and/or Office.
 
  • Like
Reactions: TheMacPotato
With regard to the cost of memory upgrades: on Wintel machines the cost is minimal because the RAM is a plug-in module; unified memory isn't. That is actually a reason to increase the base configuration to 16GB, as removing the 8GB tier reduces the cost of producing that configuration and allows much greater production volume of the 16GB base devices.

Some have questioned whether people here have even owned an Apple silicon Mac. We have: about 80 M1 iMacs in service. Yes, it's a great machine and a great introduction to Apple silicon, but it's a fallacy to suggest they don't work better with 16GB for many users. We steered clear of the M2, as I didn't believe it would surpass the M1 by enough to warrant economical replacement, and as for the M3, I won't be upgrading the base 8GB iMacs or paying for the upgrade from 8GB, partly because of the success of the original M1 iMac, and because, as ever, there's always another chip around the corner. The M1 exceeded our expectations, both for our internal use and for our customers.

We will, though, look at other devices, as we could use something higher up the computing food chain for some of our other work.

Apple's cost for RAM is higher because of the nature of unified memory, but it's not the memory component itself so much as the configuration that incorporates it as unified. Remove the 8GB tier and start at 16GB, and that cost becomes negligible; if anything, Apple would save money by dropping the 8GB configuration, never mind potentially selling more units. A $20 price bump would cover it, appease the critics, and become a PR win over the Wintel machines still shipping 8GB base configurations.

As Apple has aspirations in gaming, a 16GB base is the logical move.

OFF TOPIC: Although I spoke to Steve many times, I never spoke to Woz, so all the best. Get well soon Woz.
 
  • Like
Reactions: Rokkus76 and ric22
No one has ever said that 16GB wouldn’t work better for many users. Obviously, the more the better. 64GB would work great for almost all users, but very few want to pay for that much. The question is whether 8GB is enough for a lot of users. Not all, but some. That ”some” has to be numerous enough for computer makers to sell 8GB as their base systems. The answer, according to the entire computer industry, is a resounding yes since everybody sells 8GB systems. If it weren’t, nobody would be selling 8GB computers, yet all of them do. Over time, 4GB base was phased out for 8GB. When the time comes, 8GB will be phased out, too, but that time apparently isn’t now. For that subset of users that get along fine with 8GB, they will buy the base units and will be quite happy with them.

For everyone else, we either have to buy configurations that start with more power and more RAM or we simply upgrade the base RAM. A significant problem I have with many people wanting free RAM is that they consider their needs are everyone else’s needs, too, and can’t imagine anyone needing less. The end result is people who don’t need that much having to pay for stuff they don’t need. I don’t trust Apple (or any other company) enough to think they would provide free RAM out of the goodness of their heart. Inevitably, and it’s been proven time and time again, when they boost base specs, they also boost the price.
 
Nothing you’ve said contradicts anything I’ve said, except for a few minor errors you’ve made. I still don’t know what you disagree with me about. You keep telling me things I already know, implying you’re disagreeing with me about something. They have no intention of exposing any of it to developers. They specifically said it was entirely done in hardware so no OS nor developer implementation is needed. The entire mechanism is “transparent” to developers. The whole logic is probably done in the memory controller. BTW, there is no system RAM. When someone says system RAM, that is defined as the standard memory available to the CPU, also implying there is dedicated graphics memory, which there isn’t. In the case of the M-series of chips, the unified memory is used by both the CPU and GPU. The GPU (or CPU) makes RAM requests to the MMU and the MMU allocates and deallocates accordingly.
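For anyone following along, here is roughly what "the CPU and GPU share the same memory" looks like from the developer side. This is only a minimal sketch using the public Metal API, nothing to do with Dynamic Caching itself (which, as noted above, isn't exposed to developers); the buffer size and the kernel name mentioned in the comment are made up for illustration.

```swift
import Metal

// Minimal sketch: one allocation in unified memory, visible to both CPU and GPU.
guard let device = MTLCreateSystemDefaultDevice(),
      let buffer = device.makeBuffer(length: 1024 * MemoryLayout<Float>.stride,
                                     options: .storageModeShared) else {
    fatalError("No Metal device available")
}

// CPU side: write straight into the buffer's contents, no staging copy.
let values = buffer.contents().bindMemory(to: Float.self, capacity: 1024)
for i in 0..<1024 { values[i] = Float(i) }

// GPU side: the very same allocation would be bound to a compute encoder,
// e.g. encoder.setBuffer(buffer, offset: 0, index: 0); there is no upload
// step, because there is no separate VRAM to copy into.
```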

Commenting on our earlier conversation. Apple has now released a tech note explaining what Dynamic Caching is and how it works in great detail. I hope this will end the baseless speculation that this feature has anything to do with system DRAM. You can find the video here:

 
  • Like
Reactions: LanTao and ric22
Sure, lots of computers come with 8GB RAM, but then those computers don't cost $1,600 (USA) or up to $2,200 in some parts of the world. For that price you expect better than "sufficient", don't you?
 

Interesting read. This goes beyond what I had originally thought, but it actually does what I said it would do and then some. In traditional GPUs, memory is allocated and kept track of by registers that maintain a list of memory pages allocated for that process. That can lead to two things: 1) the GPU bottlenecks when it runs out of registers to track memory pages, and 2) it is forced to allocate a larger amount of memory than is needed for the immediate request. My theory addresses the second; the first was not something mentioned at the event, but it answers a question many of us had.

What dynamic caching does is maintain a list of soft registers not in the hardware but in a file, allowing a virtually unlimited number of memory pages to be allocated. If there were an unlimited number of hardware registers (obviously impossible), dynamic caching would not be necessary, but because there is a limited supply, the cache file allows for virtual registers. Virtual registers are created when needed and deleted when no longer needed (a dynamic allocation/deallocation of registers). There are two benefits to this: it frees up actual registers for use elsewhere, and it allows for a better percentage of cache hits, since there is a larger pool of registers to choose from. Essentially, in a normal workflow, since only a limited number of pages can be allocated, the likelihood of a cache hit is lower. With a theoretically unlimited number of pages, hits happen more often, eliminating the need to allocate additional memory and therefore speeding up the pipeline. This keeps the performance from bottlenecking, allowing for constant throughput. The big advantage is that memory can be freed up when not needed, making more unified memory available to other processes thanks to the dynamic deallocation this enables.

You know that animation Apple showed at the event where you saw several waves going up and down with another color filling up the empty spaces? The bottom of the wave represents the amount of memory being used by the GPU shaders, while the different color above it is other processes using the memory that was freed up. With the traditional method, the entire box is reserved for the shaders, with all other processes shut out of that memory; that's the first box Apple showed, which was mostly empty apart from a single spike. With dynamic caching and its ability to do real-time memory allocation and deallocation, that freed-up memory can be used by other processes, improving performance throughout the system, not just in graphics.

The other two parts of the session talked about the new ray tracing and then mesh shading, also an interesting read.

Yes, it does have to do with memory allocation and deallocation, preventing unused memory from being hogged by the shaders. But it has the additional benefit of keeping the GPU pipeline going since the pipeline does not have to wait for a register to free up to allocate more pages in memory. Apple emphasized the first part at the event and only hinted at the second part, leading many to wonder what Apple was doing to facilitate higher GPU utilization. This answers that question.
 
There is little in your post I disagree with, although I still believe it would be in Apple's best interests to have a 16GB base configuration at very little cost. I suspect it would be self-financing once production of the 8GB base is removed, and with Apple's own aspirations heading into more services, including games, 16GB would not only help but could put multiple industry critics back on side, many of whom complain about 8GB especially in a range with 'Pro' in the name. It won't stop others with greater needs from buying more RAM, and the devices geared towards heavier usage already have higher base configurations, which Apple must consider a necessity.

I'm not posting solely for consumers but for Apple too, and I'm sure they will come round to a 16GB base, at least on Pro devices. At that point it's probably cheaper across the whole range currently on 8GB, because removing the 8GB tier streamlines production and saves money, and since they already have the setup for 16GB unified memory on these devices, it would make sense from their perspective as well. It's difficult to square their stated intention to go after the lucrative gaming market with sticking to an 8GB base.

Increasingly, Apple's revenues are bolstered by services, media creation, and apps, and games are no doubt in their sights; they've made that clear. Since you can't upgrade unified memory after purchase, Apple needs to ensure its range is fit for purpose not just for today but with an eye on near-future requirements. The last thing Apple wants is to annoy purchasers if, after just a year or two, the 8GB base is no longer fit for purpose, especially when some software and game producers already stipulate more than 8GB. Apple needs to be in front of the market rather than reacting to it, and 16GB would go some way to ensuring that, and of course to reassuring customers, some of whom ironically may not need more than 8GB but are swayed by others.

Apple's ease of use, dependability, and innovation have got it to its current position, in no small part thanks to Steve, and a 16GB base would set it apart from Wintel, where 8GB may still be the base but the cost to upgrade is minimal compared with Apple's unified memory.
 
  • Like
Reactions: ric22

You almost got it. The registers are not there to maintain a list of memory pages. The registers store the thread data. As I told you before, the GPU does not actually allocate system memory, it uses the memory given to it by the driver/OS.

On traditional architectures you have different types of thread storage: per-thread registers, per-simdgroup/wave registers (often called uniforms or "constants"), per-threadgroup/block shared memory (for cooperative work by multiple threads in the group), stack, as well as various caches. Most of these are usually implemented as different physical memory blocks on the GPU core and have different performance characteristics. What Apple did is unify all these memory blocks into one single shared pool of fast on-GPU memory and virtualised (as you correctly note) the individual storage types. So if in the old architecture a GPU register was backed by a concrete cell in a register file, assigned by the driver/firmware/scheduler when the kernel was queued, on Apple G16 a GPU register is backed by the unified "Dynamic Cache" storage. That's the gist of it.

Memory pages however are a completely different topic (accessing the system RAM or what Apple calls "device memory") and have nothing to do with the Dynamic Cache. As I mentioned before, Dynamic Cache is about allocation of on-GPU storage, not the DRAM.
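To make the "assigned when the kernel was queued" part concrete, here is a rough host-side sketch of the traditional picture, where the per-threadgroup shared memory is sized explicitly at encode time. This is just the public Metal API from Swift; the kernel name "reduceSum" and the sizes are invented for the example, and none of this exposes Dynamic Caching, which happens below this level in hardware.

```swift
import Metal

// Sketch of the traditional, host-visible side of GPU storage: the threadgroup
// ("shared") memory for a kernel is sized up front when the work is encoded.
// "reduceSum" is a hypothetical kernel assumed to exist in the default library.
let device = MTLCreateSystemDefaultDevice()!
let library = device.makeDefaultLibrary()!
let function = library.makeFunction(name: "reduceSum")!
let pipeline = try! device.makeComputePipelineState(function: function)

let queue = device.makeCommandQueue()!
let commandBuffer = queue.makeCommandBuffer()!
let encoder = commandBuffer.makeComputeCommandEncoder()!

encoder.setComputePipelineState(pipeline)

// Reserve 256 floats of threadgroup memory per threadgroup, fixed for the
// whole dispatch regardless of how much any individual task actually touches.
encoder.setThreadgroupMemoryLength(256 * MemoryLayout<Float>.stride, index: 0)

encoder.dispatchThreadgroups(MTLSize(width: 64, height: 1, depth: 1),
                             threadsPerThreadgroup: MTLSize(width: 256, height: 1, depth: 1))
encoder.endEncoding()
commandBuffer.commit()
```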
 
  • Like
Reactions: ArkSingularity
I didn’t say the GPU allocates memory. It requests the memory, which is then supplied to it. You’ve almost got it. You got that unified memory means the CPU and GPU share the same memory, but somehow take that concept and miss its application. When the GPU is able to finish with its tasks quicker, allowing the memory to be released, that extra memory is freed up for the CPU to use because they share the same space. You would be correct only if the GPU had its own dedicated memory, which it does not. Any memory not used by the GPU is available for others to use. That is the point I am making. You are thinking too much in terms of a traditional system where there is a separate system memory and dedicated GPU memory. No such thing exists, so what one uses, the other can’t. Therefore memory allocation and availability are definitely affected by dynamic caching, so it has everything to do with available memory, not nothing as you claim.

I mentioned the two moving boxes Apple showed at the event. One showed a nearly empty box and some waves indicating GPU memory usage. Note the top was empty because the CPU was not allowed access to it, as it is reserved by the GPU. In the second box, the wave moves up and down rapidly as the memory is allocated and deallocated in real time. But note that the space above it is filled with another color. That’s the CPU taking advantage of the free memory that otherwise would be unused but allocated to the GPU. That is dynamic caching in action allowing more memory to be available to others. That’s the part you don’t get. You got one half of it with the dynamically allocated and deallocated registers, but you fail to get the part about the freed memory that is no longer reserved by the GPU.
 
Let me guess. You’ve never owned an 8GB Apple Silicon Mac.
Incorrect. I have a dozen users on 13” MBA w/ 8g. They’re great. We even created a KB article about M1 & 8g memory pressure for the complainers.

Windows machines w/ 8g are a joke.
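For what it's worth, the memory-pressure signal that kind of KB article describes is something macOS exposes to apps as well. A small sketch (standard Dispatch/Foundation APIs, nothing M1-specific; the "command-line demo" framing is just an assumption for the example):

```swift
import Dispatch
import Foundation

// Sketch: listen for the system's memory-pressure notifications, the same
// signal Activity Monitor's pressure graph is based on.
let source = DispatchSource.makeMemoryPressureSource(eventMask: [.warning, .critical],
                                                     queue: .main)
source.setEventHandler {
    let event = source.data
    let installedGB = ProcessInfo.processInfo.physicalMemory / (1024 * 1024 * 1024)
    print("Memory pressure event: \(event) on a \(installedGB) GB machine")
}
source.resume()

// Keep the process alive so the handler can fire (for a command-line demo).
RunLoop.main.run()
```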
 

You clearly don't want to learn, and I have no interest in continuing this conversation. I tried multiple times to explain to you the difference between on-GPU memory (only accessible by the GPU core) and the system's unified memory (shared by the CPU, GPU, and the rest of the processors), but you are not listening.


The graph describes the allocation of GPU-internal memory, not the system memory.
 
Here’s a direct transcript from the October event from Srouji:

———

It starts with a new microarchitecture that has a breakthrough feature we call Dynamic Caching, an industry first. In a traditional graphics architecture, software determines the amount of local GPU memory that’s allocated to upcoming tasks at compile time.

This results in reserving the same amount of memory for every task based on the needs of the single most demanding task, which means the GPU is under utilized especially with complex programs.

In our next-generation GPU, local memory gets dynamically allocated in hardware in real time. SO ONLY THE EXACT AMOUNT OF MEMORY THAT IS NEEDED IS USED FOR EACH TASK (emphasis mine).

———-

That last sentence is what I’m getting at. Because the system does not pre-allocate excess memory ahead of time, the GPU needs less memory: only the exact amount needed is used at any given time, rather than the maximum amount being reserved until the task is done. The old method of pre-allocation takes memory away from the CPU; it sits there mostly unused by the GPU but inaccessible to anyone else. This is what I mean when I say it is definitely all about freeing up memory for others to use.

This is straight from Johny Srouji’s mouth.

What was missing from the presentation was how they maximize GPU utilization, which is what the caching of the virtual registers does by keeping the pipeline moving without bottlenecks. That talk you posted answered that one question that everyone had after watching the presentation.
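If it helps anyone picture why that last sentence matters, here is a deliberately crude toy model of the two schemes. This is not how the actual hardware works, just the arithmetic behind "reserve for the most demanding task" versus "use only what each task needs"; the per-task numbers are invented.

```swift
// Toy model only: not the real hardware mechanism, just the arithmetic behind
// "reserve for the most demanding task" vs. "use only what each task needs".
let taskNeedsMB = [80, 300, 40]   // hypothetical per-task memory needs

// Traditional scheme: every task gets a reservation sized for the single most
// demanding task, and that memory stays reserved while the work runs.
let staticReservationMB = taskNeedsMB.max() ?? 0

// "Exact amount" scheme: occupancy tracks the task actually running, so the
// difference is headroom that other work can use in the meantime.
for need in taskNeedsMB {
    print("task needs \(need) MB, static scheme holds \(staticReservationMB) MB,",
          "\(staticReservationMB - need) MB sits idle")
}
```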
 

For the last time: you do not understand the difference between the private on-GPU memory (what Apple refers to as "shader memory") and the shared system DRAM (what Apple refers to as "device memory"). Srouji is talking about the on-GPU memory, as is made abundantly clear in the new tech note videos (they even give you easily digestible diagrams explaining the memory hierarchy and the different memory types), and as was already clear to anyone who has even the slightest understanding of how GPUs work. You can twist and misinterpret his words all you want; that's your failing, not Srouji's.
 
Does anyone have one of the modern machines? How much RAM does the system take up and how much is generally left for user apps?
I’m not an intensive user and I’m wondering if I could get by with the 8GB standard
I have a work colleague who has the 13" M1 MacBook Pro with 8GB (not his choice, he got it from a friend), and the 8GB is his biggest gripe; apparently Chrome with a few tabs open kills it. I have a 16GB 14" M1 Pro and it runs flawlessly. This video basically outlines the difference well:
 
I wonder if 8GB, when subject to heavy swapping, introduces potential problems for the SSD in terms of performance/longevity? I am not suggesting it does, I just haven't tested it, and unified memory via the M1 is comparatively recent.
 
First AAPL robs you, then they get their PR dept to gaslight us with the bogus "8GB on Macs is the same as 16GB on PCs" line. You can fool some of the people some of the time.
8GB RAM and 3,300MB/s reads in 2023... those are PCIe 3.0 PC specs from circa 2014.

None of this is impressive. They are hitting 105+ degrees on the SoC, crippling the potential of the chip because of single-fan cooling, and they aren't even near high-end discrete GPU speeds yet. What a disaster moving to an SoC for their desktop/laptop processors. This is all to drive profit and make people want to buy new again in 2-3 years. No more having to outsource, everything is glued-on components, no upgrades... disposable computing. No longer can you trust that you are getting a quality computer in a pre-built box. You have to worry whether you are getting enough RAM to make the CPU run efficiently, or whether you are getting dual-NAND storage or a single chip that cripples data speeds. This is absolutely horrible, and if anyone is defending this Tim Cook Apple, they should be ashamed of themselves. This is all going back to the John Sculley mentality... a bazillion configurations and trying to cheat the customer to make a buck. Frigging gross.

 
After a bit of research, it seems quite possible that 8GB of unified memory will cause future problems for the SSD, especially on base configurations. It's comparatively early days for Apple silicon, and time will tell whether the SSDs fail earlier than expected. Less RAM, especially if Apple has its sights set on games and when many software producers already spec more than 8GB, may result in earlier SSD failures, which is a good reason to move to a 16GB base soon.

Some time ago, an article in Macworld expressed concern about the SSD in the M1: "Recent reports have shown that some users of M1 Macs are experiencing what they feel is unreasonable, excessive usage of the SSD. One in particular showed 15TB written in two months. That’s quite a bit, and almost certainly due to swapping main memory to the SSD."
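If anyone wants to keep an eye on this on their own machine, the swap figure that sort of report alludes to is easy to read programmatically. A quick sketch (it just reads the same numbers `sysctl vm.swapusage` prints in Terminal):

```swift
import Darwin

// Read the same swap statistics that `sysctl vm.swapusage` reports.
// Heavy, sustained swapping on an 8GB machine is what drives the extra
// SSD writes people are worried about.
var usage = xsw_usage()
var size = MemoryLayout<xsw_usage>.size
if sysctlbyname("vm.swapusage", &usage, &size, nil, 0) == 0 {
    let usedMB = usage.xsu_used / 1_048_576
    let totalMB = usage.xsu_total / 1_048_576
    print("swap used: \(usedMB) MB of \(totalMB) MB allocated")
} else {
    print("sysctl vm.swapusage failed")
}
```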
 
  • Like
Reactions: eltoslightfoot