Wow, those original iMacs were a real bang for the buck!

I thought space was much colder than -55°C
 
Couldn't you fit multiple modern processors in the same or less space, though? Ridiculously more computing power (even just using, say, M1 efficiency cores) with backup chips on board. When sending something to Mars I assume every ounce, every millimeter, and every watt matters, so I'm a little surprised they're using chips this old and this inefficient. I get that they have to be reliable, but is it really impossible to put together a reliable modern chip?
Not sure if someone mentioned it already, but in space the radiation is much stronger, and these high-energy particles can deposit enough energy to flip the state of a transistor. Bit flips in the CPU (a 0 becoming a 1) during space exploration are about the last thing you want, so adding backup CPUs is not a solution when your CPU still works but gives you wrong data. A possible solution could be to run the same operation on multiple CPUs placed all over the machine, which raises the question of whether that is efficient and worth the engineering time. Plus, an old CPU has the advantage that its reliability has been proven by years of testing.

But to be more exact: one of the problems is transistor density on the silicon. At higher densities, a single energy strike can disrupt several transistors at once. Another problem is high clock speeds, where a single strike can affect several consecutive clock cycles. That can corrupt every calculation that follows, and at the end of the day your rover could smash into the surface of Jupiter instead of landing flawlessly on Mars :D

One possible solution is a "voting" system, where the same operation is processed by multiple circuits and a majority must agree to confirm that the result is correct. That one is easy to explain, but there is more to it from a hardware perspective, including even changing the wafer material, and ideally these techniques are all used together. A rough sketch of the voting idea is below.
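To make the voting idea concrete, here is a minimal sketch of triple modular redundancy (TMR) in C. It is only an illustration under assumed names: the three `compute_*` functions stand in for three physically separate circuits, and real flight hardware does this voting in logic, not in application code.

```c
#include <stdint.h>
#include <stdio.h>

/* Stand-ins for one operation performed by three independent circuits.
 * In real TMR hardware these would be physically separate units. */
static uint32_t compute_a(uint32_t x) { return 2u * x + 1u; }
static uint32_t compute_b(uint32_t x) { return 2u * x + 1u; }
static uint32_t compute_c(uint32_t x) { return 2u * x + 1u; }

/* Majority voter: accept the value at least two of three results agree on.
 * Returns 0 on success, -1 if all three disagree (unrecoverable fault). */
static int vote(uint32_t a, uint32_t b, uint32_t c, uint32_t *out)
{
    if (a == b || a == c) { *out = a; return 0; }
    if (b == c)           { *out = b; return 0; }
    return -1;
}

int main(void)
{
    uint32_t result;
    if (vote(compute_a(21), compute_b(21), compute_c(21), &result) == 0)
        printf("voted result: %u\n", (unsigned)result);  /* a flipped bit in one unit is outvoted */
    else
        printf("no majority: unrecoverable fault\n");
    return 0;
}
```

The point is that a single upset in one unit is simply outvoted; only simultaneous upsets in two units can slip a wrong value through.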

There is a whole field of science devoted to making things work reliably in harsh environments. The process I mentioned is called radiation hardening :)

Some sources if you are interested:
https://radhome.gsfc.nasa.gov/radhome/papers/radecs05_sc.pdf
https://nepp.nasa.gov/docuploads/172fdfb5-9d30-43e7-8a823f13e094d977/linspace-00.pdf
 
SpaceX's Starlink internet satellite system has the luxury of operating in low Earth orbit, where threats from solar/cosmic radiation are relatively low. Also, their satellites and system rely on redundancy and are relatively inexpensive, as is their cost to launch, which means their expected lifetime does not have to be as long as that of other space-based systems.
Uh, SpaceX boosters have an operational history measured in years - they're in use for far longer than the few minutes that boosters from Old Space operate.

SpaceX's Crew Dragon can operate in space for 210 days, 10x longer than the Space Shuttle could.

They're handling the Starship vehicle, which should travel to the Moon, Mars, and beyond, the same way as their past vehicles. There's no excuse for the absurdly high prices for decades-old technology that we see from Old Space.
 
Uh... your post in response to mine makes no sense. I wasn’t commenting on what you refer to as “Old Space.” Rather, it was about why Starlink satellite tech does not need radiation-hardened semiconductors. Any space asset operating beyond LEO does, unless shielding is employed, which drives other costs.
 
Why such an old processor?
A major part of the reason that NASA projects, in particular, use such comparatively old processors is the development time for an interplanetary mission. It can take 10 to 20 years to get some missions off the ground (literally), and changing to a new processor means recertification, additional rounds of testing, software revisions, and delays. So they just stick with the same processor from the start of development work through the actual launch. With crewed vehicles, changing computing elements is even more trouble, since the same vehicle class (or, in the case of the Space Shuttle, the same vehicles themselves) may be in use for 10 to 30 years and would require recertification (for human flight, no less, which takes longer and is more rigorous) and additional testing.
 
Technology is not NASA's strong point lol. Compared to Earth, we would be more vulnerable. I guess in space, it's also far less likely you'd wanna hack a rover.
 
Uh... your post in response to mine makes no sense. I wasn’t commenting on what you refer to as “Old Space.” Rather, it was about why Starlink satellite tech does not need radiation-hardened semiconductors. Any space asset operating beyond LEO does, unless shielding is employed, which drives other costs.
You're the one who veered the conversation towards Starlink though. I was talking about SpaceX's entire portfolio of hardware, most of which isn't limited to operating in LEO, and most of which is in use for longer than most of the devices from Old Space (aside from the odd interplanetary mission that gets launched every few years).

Specialized "radiation hardened" hardware is unnecessary. SpaceX accomplishes radiation hardening of ordinary commercial hardware through software: have the same program running on multiple cheap computers and look for consensus. When one occasionally returns a different answer, consider it damaged and stop listening to it. Hardware failures aren't unique to space - this same practice is done all the time in data centers on Earth, and it scales up well and far more cheaply (and with much more up-to-date hardware) than using 23-year-old CPUs at prices many thousands of times higher than they should be.
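For flavor, here is a minimal sketch in C of that consensus-and-exclusion scheme. The node count, strike threshold, and overall structure are my own illustrative assumptions, not SpaceX's actual flight software:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NODES 5
#define MAX_STRIKES 3   /* disagreements tolerated before a node is dropped (assumed) */

typedef struct {
    int  strikes;
    bool excluded;
} node_state;

/* Return the answer most non-excluded nodes agree on, then add a strike
 * to each dissenter; after MAX_STRIKES, stop listening to that node. */
static uint32_t consensus(const uint32_t answers[], node_state state[])
{
    uint32_t best = 0;
    int best_count = 0;
    for (int i = 0; i < NODES; i++) {
        if (state[i].excluded) continue;
        int count = 0;
        for (int j = 0; j < NODES; j++)
            if (!state[j].excluded && answers[j] == answers[i])
                count++;
        if (count > best_count) { best_count = count; best = answers[i]; }
    }
    for (int i = 0; i < NODES; i++) {
        if (!state[i].excluded && answers[i] != best &&
            ++state[i].strikes >= MAX_STRIKES) {
            state[i].excluded = true;
            printf("node %d marked damaged, no longer trusted\n", i);
        }
    }
    return best;
}

int main(void)
{
    node_state state[NODES] = { { 0, false } };
    uint32_t answers[NODES] = { 42, 42, 42, 7, 42 };  /* node 3 returns a bad value */

    for (int round = 0; round < 4; round++)
        printf("round %d consensus: %u\n", round, (unsigned)consensus(answers, state));
    return 0;
}
```

The same pattern scales to more nodes; the tradeoff is that every node still sits in the same radiation environment, which is the objection raised further down the thread.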
 

"You're the one who veered the conversation towards Starlink though"

Correct, as an example of how SpaceX gets away without radiation hardening: very low-cost satellites in low orbits, with expected short lifetimes (due to atmospheric drag), massive redundancy if one should fail, and almost no cost to launch by piggybacking onto their reusable launch vehicles alongside *paying* customers inserting their assets into space.
 
Specialized "radiation hardened" hardware is unnecessary. SpaceX accomplishes radiation hardening of ordinary commercial hardware through software: have the same program running on multiple cheap computers and look for consensus. When one occasionally returns a different answer, consider it damaged and stop listening to it. Hardware failures aren't unique to space - this same practice is done all the time in data centers on Earth, and it scales up well and far more cheaply (and with much more up-to-date hardware) than using 23-year-old CPUs at prices many thousands of times higher than they should be.
As many have pointed out, there are huge differences in the need for radiation hardening between longer space missions and shorter LEO missions. Radiation-hardened CPUs can stand up to many orders of magnitude more radiation than normal components can. For missions in LEO, or lasting days or months, it may not be that important. But missions to Mars and beyond take years and may last for over a decade. Missions outside LEO, or more extreme still, close to the Sun or Jupiter, are exposed to very high radiation levels over time.

The risk of radiation damage in space can be divided into two types: high-intensity single events and long-term accumulated dose. The former is usually constant over time, like the risk of damage from a short solar flare. The latter carries a low risk for a long time at the start (how long is determined by the level of hardening), but the risk then increases rather rapidly. For a decade-long mission you must be sure your electronics can take the long-term effects. $200,000 is a very low price to pay for that.

Having several redundant processors is of course good for redundancy within the expected lifetime of the CPU in the given environment. But having 3-5 backup CPUs does little to help if the expected time to failure due to radiation is 1 year and the mission is expected to last 5-10 years. Many spacecraft also pack more than one (radiation-hardened) CPU for redundancy.
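As a back-of-the-envelope illustration of why backups don't rescue a short expected lifetime, here is the dose arithmetic in C; the numbers are made up for the example, not taken from any real mission's dose budget:

```c
#include <stdio.h>

int main(void)
{
    /* Illustrative, assumed numbers (not from a real dose budget):
     * a COTS part rated ~10 krad total ionizing dose, a rad-hard part
     * rated ~1000 krad, in an environment accumulating ~50 krad/year. */
    const double dose_rate_krad_per_year = 50.0;
    const double cots_rating_krad        = 10.0;
    const double radhard_rating_krad     = 1000.0;

    /* Identical COTS backups accumulate dose on the same schedule, so the
     * expected lifetime is rating / dose rate no matter how many spares
     * you carry. */
    printf("COTS part:     ~%.1f years\n", cots_rating_krad / dose_rate_krad_per_year);
    printf("Rad-hard part: ~%.1f years\n", radhard_rating_krad / dose_rate_krad_per_year);
    return 0;
}
```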

A hardened 750 can withstand a radiation dose of about 1 Mrad (1,000 times the lethal dose for a human). The Juno Jupiter expedition used that processor, but the radiation dose was expected to reach 20 Mrad over time, so the CPU had to be further hardened by an extra titanium enclosure.

The RAD750 is used in over 150 space missions, most in Earth orbit but also in the Deep Impact comet chaser, the Mars Reconnaissance Orbiter, the Kepler space telescope, WISE, the Solar Dynamics Observatory, Juno, the Curiosity rover, and of course the Perseverance rover. So it is a very well-proven CPU with an excellent track record, which counts for a lot in these missions.
 
The PowerPC 750 was ahead of the game for its time: a single-core, 233MHz processor with 6 million transistors (compared to today's 16 billion in a single chip), based on a 32-bit architecture.

The original Macs used Motorola 68xxx chips. It wasn't until 1994 that the transition to the PowerPC family began.
Apple designed a 68K emulator that was included with every copy of Mac OS. This meant the new Macs could run almost all older 68K software seamlessly (albeit with some speed penalties), allowing a smooth transition to PowerPC. (A brief history of Mac CPUs)
Apple Macs have transitioned CPU architecture twice already and are now doing so a third time with the move to Apple Silicon. Each of the two previous transitions went pretty smoothly, so Apple SHOULD have sufficient experience to make the transition to M1 smooth as well. It's unfortunate that we've seen so many glitches with this transition. I doubt Jobs would have been happy.
 
Specialized "radiation hardened" hardware is unnecessary. SpaceX accomplishes radiation hardening of ordinary commercial hardware through software: have the same program running on multiple cheap computers and look for consensus. When one occasionally returns a different answer, consider it damaged and stop listening to it.
It's clear you do not understand radiation hardening, or only understand it superficially. What you are describing only works for low-criticality systems and for those in low-dose environments.

If you are going to fly in GEO or to Jupiter, you cannot solve the problem with redundancy. All your redundant COTS systems will get dosed equally and will all die around the same time, likely before your mission starts anything useful.
 

What I describe will hold up totally fine for all Single Event Effects - individual bits getting flipped in individual calculations.

It won't deal as well with circuits getting permanently damaged. I'm not sure how much of a problem that is - how often does it actually happen, and how much of it is actually consequential? I suspect a lot of damage could occur that would never matter for most operations.
 
It's totally application-dependent. A cubesat intended to survive a couple of years in LEO is *probably* OK with random off-the-shelf junk, because LEO is very, very benign. For some applications that's totally fine.

All those comm sats in GEO and GPS sats in MEO would be dead in a month with fully off-the-shelf builds. Seriously. The dose is a hundred times worse. All it takes is one voltage regulator or CMOS switch (as examples of potentially vulnerable parts) to degrade and you may be done, and we can't afford a bunch of dead GPS satellites.

BTW, single events can also be instantly destructive - it's not just bit flips - so no, it won't hold up for ALL SEE. Redundancy does help there, since these are random events, but that may not be enough.

Starlink is a reasonable place to make the tradeoff and fly cheap stuff. Perseverance, Europa Clipper, or GPS would not be.
 
“...and comes with an added $200,000 price tag.”

Behind closed doors at Apple HQ they are currently working on pricing for the new M1x line.

They stop and wonder for a second if the people are ready for a 6 figure Mac.

Thanks NASA.
The only connection this chip has to Apple is that they used to use it in the iMacs. The chip was produced by IBM and Motorola and supplied to Apple and several other companies. As I understand it, the temperature and radiation shielding, which ups the cost significantly, is done by NASA or a contracted third party, which may well be IBM.
 
As I understand it, the temperature and radiation shielding, which ups the cost significantly, is done by NASA or a contracted third party, which may well be IBM.
It's MUCH more than that. The BAE RAD750 shares the core architecture of the Apple part but is redesigned and fabricated on an entirely different process line. NASA is just one small customer; these are also used by the DoD and commercial customers.
 
Maybe there is a valid reason for using old chipsets that could have been alluded to by the author? Barring that, it just makes it sound like the NASA engineers are a bunch of old guys whose last innovation was, what, 20 years ago? Who says you can't teach old engineers new tricks?

So come on, what is the reason? Anybody? Is it radiation or something?
As SpaceX has shown, shielding for newer chips is not an issue. But think about it: there is no need to run anything graphically or computationally intensive on the rover. It's basically an optimized system for controlling onboard technologies and the navigation system and beaming collected data back to Earth. Most of the analysis will be done on Earth with vastly more powerful computers. The G3 chip is also very thoroughly tested at this point, after being used in many aerospace systems. The choice is between [wasted potential + potential new bugs] and [enough capability + little to no bugs].
 
The only connection this chip has to Apple is that they used to use it in the iMacs. The chip was produced by IBM and Motorola and supplied to Apple and several other companies. As I understand it, the temperature and radiation shielding, which ups the cost significantly, is done by NASA or a contracted third party, which may well be IBM.
I don’t think it’s immediately clear how large or small a role Apple played in the AIM Alliance from a hardware perspective. As the primary customer and largest user of the chips the alliance made, Apple certainly had a significant role in determining the processor roadmap and feature set, even if they weren’t designing the chips themselves. And it seems that Apple had significant developmental input on the PowerPC chips through the first few generations (601, 603, and 604 for sure, likely up to the 750*). The reason the Intel switch occurred was that IBM wasn’t able to get the performance Apple demanded from the pro and prosumer chips on the high end and wasn’t able to maintain the thermal performance Apple wanted on the low end (which sounds a lot like the reasons for Apple’s transition from x86 to ARM, actually).

* The development of the PowerPC 6xx series was definitely driven by Apple as part of the AIM alliance (Freescale retroactively renamed the 603 series as the G2). The 7xx series (especially the 740 and 750, the first chips in the series, better known to us as the G3) also seems to have been designed by the AIM alliance as a whole, as opposed to Motorola or IBM specifically. It seems that the 74xx series (the G4) was developed primarily by Motorola for Apple, and the 74xx series doesn’t have the same degree of usage outside of Apple that the 7xx series does. The 970 was designed by IBM, and it seems that, while Apple’s needs were one of the market drivers, it was always intended for industrial uses, and IBM used it quite heavily in workstation and server environments. So it looks like the PowerPC series had significant design contributions from Apple up until around the 74xx, and Apple certainly didn’t play a significant role in developing the 970 series.
 
I don't know how many times I can say it in this discussion: This. Is. Not. Accurate.
Exactly, it makes no sense to compare a Starlink satellite with Perseverance.

Starlink is the closest we come to a disposable satellite, designed to cut costs, at a price of less than $0.5M each. With several tens of thousands of Starlink satellites, each with less than a 5-year life expectancy in the "radiation safe" very low Earth orbit zone, they will have to replace at least 15-20 each day to keep coverage when fully deployed. Failure is expected; redundancy is massive.

Perseverance, on the other hand, is an 11-year project at $3 billion. A "no-backup" mission, spending years in harsh radiation environments. It is crucial to minimize every single source of failure (as long as doing so does not make the system significantly heavier).
 