The Newton was a failure. The product line was killed years before development on the iPhone even started. The only thing the Newton did that may have contributed to the iPhone's development was show other manufacturers such as Palm and Compaq that a PDA might be a saleable device, and the devices those companies created in turn helped shape Apple's design direction on the iPhone. Inkwell was about the only Newton technology that even survived past the year 2000 (being added to Jaguar), but even that technology really didn't contribute much to the iPhone's development beyond generating industry speculation that Apple was working on a new Newton-inspired PDA of its own. As BlackBerry had already proven by the time Apple started on the iPhone, even great handwriting recognition will lose out to mediocre keyboard input.
The Newton stopped shipping in 1998 and iPhone development began in 2004, so the gap between them was about six years. Regardless, I don't know how much influence the Newton had on the iPhone.
 
We disagree. IMO the AVP is not "a failure due to low interest and sales." IMO the AVP is a superb tech demo and v1 tech product. Repeat: superb. And yes, I think 100,000 x $3,500 is a lot of sales. At least 99% (and probably 100%) of the tech firms in the world will be happy selling 100,000 x $3,500.

But the important thing about the AVP is what it demonstrates, and demonstrates well. The AVP demos a new tech direction using great tech hardware that is just waiting for software to catch up.
The AVP demonstrated that the technology is still too primitive for an AR/VR revolution. Like its competitors, the AVP is a heavy, cumbersome device with limited battery life. It's good for specific applications, in specific locations, and for a limited time. You don't want to use it everywhere all day.

Meta "AI glasses" are a more interesting product. At 50–70 grams, they are at the upper limit of what people are willing to wear all day. Meta promises "up to 8 hours" of battery life for the latest generation, which is good enough for a special-purpose device. But you would really want at least 16 hours of guaranteed battery life to actually use them in your daily life. Then add AVP-level functionality, and you'll have a proper AR/VR lifestyle device.
 
ECC RAM offered no advantage to anyone running workloads outside of high-precision scientific or engineering realms
I agree with everything else you wrote, but not with this. ECC should be the standard in every device, from the Apple Watch to the Mac Pro. RAM is an essential component of every computer, and you want to know when it starts going bad. For example you want to know whether that random reboot last Tuesday was a software glitch or a hardware problem.
Why do you justify the degeneration of Mac specs? Apple used to have a workstation for professional use, but now the Mac only offers up to mid-range specs, which is a joke.
The joke is to expect Apple, a company that caters to consumers, to invest a lot of money into chasing high-end markets that have either mostly disappeared (workstations, as most people get by with a powerful laptop) or in which it was never successful (servers). AMD can afford to make Threadripper for workstations because it reuses technology from its lucrative Epyc line.
 
I agree with everything else you wrote, but not with this. ECC should be the standard in every device, from the Apple Watch to the Mac Pro. RAM is an essential component of every computer, and you want to know when it starts going bad. For example you want to know whether that random reboot last Tuesday was a software glitch or a hardware problem.
For workstation use, detecting failing RAM is not the major advantage that ECC provides. Bad RAM is diagnosed in ECC and non-ECC systems the same way: through UEFI or software diagnostic tools. There are two advantages ECC might provide in the occasional case where such a reboot is caused by a flipped bit: 1) it would correct that bit and the system would continue on without rebooting, and 2) it would log that error and, if someone is checking logs regularly and sees it happening more frequently, that might give an early warning of a RAM issue. In practice, however, such early detection from ECC error logging almost never happens on workstations, and it essentially never happens on home computers, phones, or watches. (Also note that many non-ECC systems can detect and log such errors without correcting them, so this advantage is somewhat diminished.)

The kind of memory errors that ECC RAM is designed to correct (single bit-flips caused by cosmic radiation or electromagnetic interference) are relatively rare occurrences in general, and errors affecting working RAM are even more rare. On a typical home computer or laptop, a bit-flip might cause some effect in software less than a dozen times a year if that machine were left on all day every day, and in most cases, that effect would be unnoticeable. It might manifest as an incorrect colour in one or a small group of pixels in a photo or a frame of video during playback, or it might change an 8 to a 9 in a spreadsheet. On a very, very rare occasion, it might affect running code and cause a program to crash.
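For anyone who wants to sanity-check that sort of figure, here's a rough back-of-the-envelope sketch. The FIT rate below is an assumed ballpark, not a measured value for any particular DRAM, and only a fraction of raw flips land in memory that's actually in use:

# Rough soft-error estimate for one always-on machine.
# ASSUMPTION: ~25 FIT per Mbit (failures per 10^9 device-hours) is an
# illustrative ballpark only; real rates vary a lot with process,
# altitude and shielding, and most flips hit memory that isn't in use.
FIT_PER_MBIT = 25.0                      # assumed
RAM_GB = 16
HOURS_PER_YEAR = 24 * 365

mbits = RAM_GB * 8 * 1024                # RAM size in megabits
flips_per_year = FIT_PER_MBIT * mbits / 1e9 * HOURS_PER_YEAR
print(f"~{flips_per_year:.0f} raw bit flips/year for {RAM_GB} GB left on 24/7")

That works out to a few dozen raw flips per year, of which only the ones hitting data that is actually in use would have any visible effect.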

On a typical professional Mac workload (Photoshop, audio/video editing, desktop publishing), bit-flip errors would affect so little, if any, of a user's workflow that the extra memory cost and the performance hit from ECC overhead aren't worthwhile.

Now, on a CAD workstation or a workstation used for high-precision engineering applications, a handful of errors causing incorrect data is unacceptable, and such errors are more frequent due to the RAM size and the amount of working RAM in such machines, so ECC RAM is used in those applications. Servers are spec'd with ECC RAM because almost all server RAM is working memory and there is a lot of it, so the frequency of errors is higher, and uptime and data integrity are vital. Outside of these areas, however, ECC RAM offers very little benefit.
 
I agree with everything else you wrote, but not with this. ECC should be the standard in every device, from the Apple Watch to the Mac Pro. RAM is an essential component of every computer, and you want to know when it starts going bad. For example you want to know whether that random reboot last Tuesday was a software glitch or a hardware problem.

I wouldn't be surprised if Apple already uses some form of ECC in their RAM. I wouldn't know how to verify it.

 
It’s clearly only a stopgap/last gasp. The Mac Pro housing was ludicrous enough for the 2019 Intel Mac - keeping it for the 2023 model, which doesn’t need to cool Intel/AMD space heaters, is obviously a compromise.
When they rolled out the new Mac Pro using Apple silicon, I was shocked that they used the same enclosure. On one level I understand the logic of reusing an existing enclosure, but it really highlighted the deficiencies of this $6,000-$10,000+ computer. With Apple silicon and most everything already soldered onto the logic board, the size, cooling, and expansion bays were useless in the Mac Pro.

I love the design of the case, and for years I was looking for a knock-off to build my own PC with, so it's not like I have anything against the design - it's fantastic.
 
Outside of these areas, however, ECC RAM offers very little benefit.

On a typical home computer or laptop, a bit-flip might cause some effect in software less than a dozen times a year if that machine were left on all day every day, and in most cases, that effect would be unnoticeable.
I strongly disagree. Without ECC you do not know how often your processor is reading back data with a flipped bit. In a healthy system it might not happen very often, but ECC will let you know when your RAM starts going bad. And I don't see why my online banking is less important than some scientific calculations. And rowhammer exists. Why is data integrity so unimportant to you?
 
I wouldn't be surprised if Apple already uses some form of ECC in their RAM. I wouldn't know how to verify it.
All DDR5 is so cheaply made that it requires on-die ECC to even have a chance of not corrupting data; however, that is not a replacement for traditional end-to-end ECC.
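For anyone wondering what error correction actually means mechanically, here's a toy sketch using a classic Hamming(7,4) code that can fix a single flipped bit. This is an illustration of the principle only - it is not how DDR5's on-die ECC or a 72-bit SECDED DIMM is actually implemented:

# Toy Hamming(7,4): 4 data bits plus 3 parity bits, able to correct any
# single flipped bit. Real ECC DIMMs use a wider SECDED code
# (64 data + 8 check bits), but the principle is the same.

def encode(d):                      # d = list of 4 data bits
    c = [0, 0, d[0], 0, d[1], d[2], d[3]]      # codeword positions 1..7
    c[0] = c[2] ^ c[4] ^ c[6]       # parity over positions 1,3,5,7
    c[1] = c[2] ^ c[5] ^ c[6]       # parity over positions 2,3,6,7
    c[3] = c[4] ^ c[5] ^ c[6]       # parity over positions 4,5,6,7
    return c

def correct(c):                     # c = 7-bit codeword, possibly corrupted
    s = ((c[0] ^ c[2] ^ c[4] ^ c[6])
         | (c[1] ^ c[2] ^ c[5] ^ c[6]) << 1
         | (c[3] ^ c[4] ^ c[5] ^ c[6]) << 2)   # syndrome = error position
    if s:
        c[s - 1] ^= 1               # flip the bad bit back
    return c

word = encode([1, 0, 1, 1])
word[4] ^= 1                        # simulate a cosmic-ray bit flip
fixed = correct(word)
print([fixed[2], fixed[4], fixed[5], fixed[6]])   # original data: [1, 0, 1, 1]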
 
I strongly disagree. Without ECC you do not know how often your processor is reading back data with a flipped bit.
"Data" in this context includes program code, addresses, encryption keys & checksums, keys in databases, complex file formats where a flipped bit will make the file unreadable, so any significant rate of bit-flipping - e.g. due to a memory chip going bad - would cause a string of otherwise inexplicable errors and crashes.

And I don't see why my online banking is less important than some scientific calculations.
It's a simple question of scale & risk vs. cost.

"Your" online banking - involving a few kilobytes of data processed in a fraction of a second - is vanishingly unlikely to be hit with a memory error on your computer - let alone one that subtly and silently changes your bank balance. The servers at the bank are dealing with millions of times more data per second, and are therefore a million times more likely to be hit with an error - so a "vanishingly unlikely" occurrence for you can become a daily occurrence for a large data-centre. Nobody is suggesting that the servers at the bank should skimp on ECC - although one would hope that there is a lot of other multiply-redundant integrity checking going on.

"some scientific calculations" may involve terabyte data sets or runs that last hours or days - again turning a vanishingly small error rate into a regular occurrence. Again, the results need to be thoroughly checked - errors should either get spotted or be within tolerances - so the case for ECC is mainly the time & money wasted if a job fails and has to be re-run.

ECC requires, what, 25% (?) extra RAM, extra implementation costs, and can slow things down. I believe that LPDDR relies on in-band check bits, which reduces memory bandwidth and takes a bite out of your available memory space (which is already somewhat limited on Apple Silicon). If you don't need ECC, you don't want it. You definitely don't want it in your watch, phone, tablet or MacBook Air. In a personal workstation... I'm going to guess "no" but you'd have to do the cost vs. risk analysis for your own personal workflow (and that would have to involve finding the error rate for on-package LPDDR5X, which you really can't assume is the same as bog-standard DDR5 DIMMs - maybe greater, maybe less).

...but ECC is mainly essential for datacentre-scale applications where cost-of-downtime is significant. Apple don't have a horse in that race, and it's not what the current Apple Silicon range is good for.
 
AI is the future, and a lot of companies are training their OWN AI models. Besides, since Apple already made Apple Intelligence, it's inevitable that Apple needs to create its own. Don't forget that Apple is also doing its own AI research.


It's more important to have dedicated chips such as an NPU.
But you don’t need to develop your own chip in order to develop and train an AI.
 
All DDR5 is so cheaply made that it requires on-die ECC to even have a chance of not corrupting data; however, that is not a replacement for traditional end-to-end ECC.

This has nothing to do with being “cheaply” made. It’s the consequence of having to meet the design spec. You could skip on-die ECC by making the RAM slower or less dense, or both, but I doubt that the resulting “high-quality” DDR would have made you happier.

For the current context the question boils down to the following:

- Does Apple use any form of link-ECC, either for detection or correction? If yes, what does it look like?

- If they don’t use it, does Apple Silicon even need link-ECC given the placement of the RAM so close to the SoC?

I believe there are two or three users on this forum possessing the expert knowledge to discuss these things. These users also have been wise enough to avoid this thread so far.
 
I don't see why my online banking is less important than some scientific calculations.
Your online banking is not being done on your local device. It is being done on a remote server with massive redundancy and enterprise-level ECC RAM.

And rowhammer exists. Why is data integrity so unimportant to you?
Because the issue is and has always been overblown. Even with rowhammer. If it were as big an issue as you are making it out to be, no consumer computers would work, ever. Your car likely doesn't have a rocket-propelled ejection seat with a parachute - why is automotive safety unimportant to you?
 
I wouldn't be surprised if Apple already uses some form of ECC in their RAM. I wouldn't know how to verify it.
Considering how much Apple highlighted the use of Xeon processors and ECC RAM in the Mac Pro, the fact that there was never ECC RAM in any other Intel or PowerPC Apple computer in history, and the fact that Apple doesn't say so in any of its literature now, I'd be incredibly surprised if there was.
 
This has nothing to do with being “cheaply” made.
It's due to smaller structures, meaning smaller capacitors in the DRAM cells, which are more error-prone. And all of this is done to drive down costs, i.e. to make it cheaper. What other reason do you think there is?
 
ECC requires, what, 25% (?) extra RAM, extra implementation costs, and can slow things down.
Traditionally it's 12.5% when you have 72-bit-wide RAM instead of 64-bit. "Slow things down" is not a good counterargument to reliability. Do you dislike journaling file systems? And before you say that filesystem corruption is more frequent than bit errors in RAM, first, without ECC you don't know how frequent it is, and, second, as I already wrote, one role of ECC is as an early-warning system for about-to-fail RAM chips.
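The 12.5% figure falls straight out of the usual Hamming/SECDED arithmetic - a quick sketch of the bookkeeping (standard coding theory, nothing Apple- or DDR-specific):

# A Hamming code needs 2**r >= m + r + 1 check bits so the syndrome can
# name any single bad bit (or "no error"); one extra parity bit upgrades
# it to SECDED (single-error-correct, double-error-detect).
def check_bits(m):
    r = 1
    while 2 ** r < m + r + 1:
        r += 1
    return r

m = 64
r = check_bits(m) + 1        # +1 parity bit for double-error detection
print(f"{m} data bits need {r} check bits -> {m + r}-bit word, "
      f"{r / m:.1%} overhead")   # 64 + 8 = 72 bits, 12.5% overhead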
 
It's due to smaller structures, meaning smaller capacitors in the DRAM cells, which are more error-prone. And all of this is done to drive down costs, i.e. to make it cheaper. What other reason do you think there is?


Faster RAM, higher capacities. You can of course see it as driving down the cost, but that’s not a very common way of putting it. Are we making faster CPUs to drive down compute costs? Hardly.
 
Traditionally it's 12.5% when you have 72-bit-wide RAM instead of 64-bit. "Slow things down" is not a good counterargument to reliability. Do you dislike journaling file systems? And before you say that filesystem corruption is more frequent than bit errors in RAM, first, without ECC you don't know how frequent it is, and, second, as I already wrote, one role of ECC is as an early-warning system for about-to-fail RAM chips.

How would you know that Apple is not detecting and reporting errors already? Again, they have had relevant patents for years. Of course, this doesn’t mean anything - we simply have no way of knowing unless they tell us.

BTW, I don’t see the need for link error correction - just detection should suffice. The controller can discard a faulty data packet and request it again if needed. This should reduce the amount of overhead and help alleviate performance worries.
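A minimal sketch of that detect-and-retry idea, with a CRC standing in for whatever link-level check a real memory controller might use - the "protocol" below is invented purely for illustration and says nothing about how Apple's memory fabric actually works:

import random
import zlib

def send(payload):
    """Simulated link: occasionally flips a bit in flight. (In-flight errors
    are the only kind a retry can fix - a bad cell re-reads wrong again.)"""
    data = bytearray(payload)
    if random.random() < 0.3:                    # injected fault rate (assumed)
        data[random.randrange(len(data))] ^= 0x01
    return bytes(data), zlib.crc32(payload)      # check value from the sender

def read_with_retry(payload, max_retries=3):
    for attempt in range(1, max_retries + 1):
        data, check = send(payload)
        if zlib.crc32(data) == check:            # detection only, no correction
            return data
        print(f"check failed on attempt {attempt}, re-requesting")
    raise IOError("link errors persisted after retries")

print(read_with_retry(b"one cache line of data"))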
 
But you don’t need to develop your own chip in order to develop and train an AI.
What makes you think that, especially since Apple is NOT using Nvidia graphics cards? Apple has its own AI models and workflows, and it isn't great to rely on someone else's hardware to train and study them, just like the others such as Google, Meta, Microsoft, Amazon, Tesla, and more. This is why they made their own chips and NPUs for AI, since a graphics card by itself is extremely poor for AI purposes in real life. It's just that Nvidia's ecosystem is not yet replaceable, but those big tech companies already have their own workflows and therefore they need their own chips.
 
Faster RAM, higher capacities. You can of course see it as driving down the cost, but that’s not a very common way of putting it. Are we making faster CPUs to drive down compute costs? Hardly.
I don't understand the jump from me mentioning smaller DRAM structures to you talking about faster CPUs.
 
How would you know that Apple is not detecting and reporting errors already? Again, they have had relevant patents for years. Of course, this doesn’t mean anything - we simply have no way of knowing unless they tell us.
Someone already answered this question: if Apple had such a useful feature, they'd surely use it in their marketing.
BTW, I don’t see the need for link error correction - just detection should suffice. The controller can discard a faulty data packet and request it again if needed. This should reduce the amount of overhead and help alleviate performance worries.
Detection would be a good step forward, though requesting again only works if the problem was in flight and not while reading the RAM cell.
 
I don't understand the jump from me mentioning smaller DRAM structures to you talking about faster CPUs.

Because it’s the same thing according to the logic you appear to employ. Smaller RAM structures = more RAM per $, faster CPU = more compute per $. Of course, only a few look at it that way.

We have DDR5 because we need more RAM and we want it to be faster, not because we want to save money.
 
Someone already answered this question: if Apple had such a useful feature, they'd surely use it in their marketing.

They also have other useful features like surge-protected USB ports, per-port MMUs, memory controller QoS, and the M5 GPU just doubled its integer multiplication throughput - but you won’t find any of these things on the marketing sheet. Not a strong argument IMO.
 
They also have other useful features like surge-protected USB ports, per-port MMUs, memory controller QoS, and the M5 GPU just doubled its integer multiplication throughput - but you won’t find any of these things on the marketing sheet. Not a strong argument IMO.
Yes, very true. Also worth mentioning that they integrate Memory Signal Processors on their SSDs, which they gained when they purchased the company “Anobit”. These greatly increase durability compared to similar NAND. So, for example, TLC can have the durability of MLC, QLC can have the durability of TLC, etc. Very cool. Never mentioned in marketing at all.

Anyway, back to the discussion and sorry for the interruption.
 