jhu said:
interesting. so it's only mac os x that has issues with memory quality?
Memory quality is important on any computer... but Apple started enforcing quality standards before Mac OS X because most of the crashing issues of Mac OS 7/8/9 could be linked to poor memory quality (which, in turn, reflected poorly on Apple). It was a firmware update released during the Mac OS 9 era that made some Macs stop recognizing poor-quality memory.
While Mac OS 7/8/9 might crash, the problems that came from bad memory on Unix systems could be even worse.
And we are talking about any of the systems from NeXTSTEP 0.8 up to Mac OS X 10.4.x. In the case of NEXTSTEP/OPENSTEP/Rhapsody, we are talking about Mach plus 4.3/4.4BSD. Darwin was Apple's first step into using 4.4BSD-Lite (via elements from FreeBSD).
But by no means is this restricted to Mac OS X systems. I had a client last week with a 7300 that was crashing constantly. It turned out that one 64 MB DIMM was causing all the issues, and pulling it made the system solid again.
how about linux and other unix-like oses? i would guess no since cheap memory seems to work on my non-mac machine.
I have no idea about Linux... I don't use it. But are we talking about a pre-2000 non-Mac system? You quoted me discussing Macs from 1995 to 1999... are you running Linux on a 1995-99 era PC with cheap memory from that period?
For Unix systems, memory quality has historically been very important. Suns, SGIs, and NeXT systems usually required a higher grade of memory than your average PC would use.
But then again, when you paid $800+ for an operating system (the average price for a Unix OS through the late 90's), you weren't going to risk your system on questionable memory quality (especially considering the thousands you most likely spent on the hardware to begin with).
I've seen plenty of Unix systems brought to their knees because of memory issues... and that would usually be the first thing to check, too. When a rock-solid Unix system starts acting flaky, one of the first places to look is hardware.
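If you're curious how memory testers go about finding a bad module, here's a minimal sketch in C (my own illustration, not taken from any vendor's diagnostics): it writes fixed bit patterns to a buffer and reads them back, reporting any mismatch. A userspace test like this only touches whatever pages the OS happens to hand the process, so it's no substitute for a dedicated tool like memtest86, but the write/verify idea is the same.

/*
 * Minimal userspace memory pattern test (illustrative sketch only).
 * Allocates a buffer, writes a pattern, reads it back, and reports
 * any words that came back different from what was written.
 */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

static size_t check_pattern(uint64_t *buf, size_t words, uint64_t pattern)
{
    size_t errors = 0;

    for (size_t i = 0; i < words; i++)
        buf[i] = pattern;                       /* write pass */

    for (size_t i = 0; i < words; i++) {        /* verify pass */
        if (buf[i] != pattern) {
            fprintf(stderr, "mismatch at word %zu: wrote %016llx, read %016llx\n",
                    i, (unsigned long long)pattern, (unsigned long long)buf[i]);
            errors++;
        }
    }
    return errors;
}

int main(void)
{
    size_t bytes = 64UL * 1024 * 1024;          /* test a 64 MB buffer */
    size_t words = bytes / sizeof(uint64_t);
    uint64_t patterns[] = { 0x0000000000000000ULL,   /* all zeros */
                            0xFFFFFFFFFFFFFFFFULL,   /* all ones  */
                            0xAAAAAAAAAAAAAAAAULL,   /* 1010...   */
                            0x5555555555555555ULL }; /* 0101...   */
    uint64_t *buf = malloc(bytes);
    size_t errors = 0;

    if (!buf) {
        perror("malloc");
        return 1;
    }

    for (size_t p = 0; p < sizeof(patterns) / sizeof(patterns[0]); p++)
        errors += check_pattern(buf, words, patterns[p]);

    printf("%zu error(s) found\n", errors);
    free(buf);
    return errors ? 1 : 0;
}

Real testers do much more than this (walking bits, address-line tests, running from outside the OS so they can cover all of physical RAM), but even a crude check like the above will sometimes catch a module as bad as the one in that 7300.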
Besides, memory quality (on the whole) these days doesn't vary as widely as it did in the 90's. And all memory today is cheap compared to the prices we once had to pay.
But yes, Unix vendors whose reputations were made or broken on the quality of their hardware were very picky about what memory would go into their systems.
Reputations aren't made or broken for Open Source Unix-like systems on PCs in the same way. I doubt that Linux or BSD are going to lose revenue if they don't perform to the same standards on today's cheap PCs as Unix vendors had to back in the 90's.
I don't think you can compare your 21st-century systems with pre-turn-of-the-century hardware. Slower systems, poorer manufacturing quality across the board, and dramatically higher prices for anything computer-related meant that you hunted for quality if you wanted stability... today everything is faster than people should need and it is all a bargain.
But I didn't see Linux in the early-to-mid 90's as being that much better than, say, Windows NT on the same hardware. On great hardware both were very solid; on cheap hardware... you got what you paid for.
