> For anyone interested, the 750 in Perseverance is a RAD750 from IBM.

It's manufactured and sold by BAE Systems.
> Couldn't you fit multiple modern processors in the same or less space though? Ridiculously more computing power (even just using, say, M1 efficiency cores) with backup chips on board. When sending something to Mars I assume every ounce, every millimeter, and every watt matters, so I'm a little surprised they're using chips this old and this inefficient. I get that they have to be reliable, but is it really impossible to put together a reliable modern chip?

Not sure if someone mentioned it already, but in space the radiation is much stronger, and these high-energy waves can give electrons enough energy to cause level shifts in transistors. Level shifts in the CPU (a 0 bit becoming a 1 bit) during space exploration are probably the last thing you want, so providing backup CPUs is not a solution when your CPU still works but gives you wrong data. A possible solution could be using more CPUs for the same operation, placed all over the machine, which raises the question of whether that's efficient and worth the investment of time. Plus there is the downside that, unlike the old CPUs, their reliability isn't proven by years of testing.
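A minimal sketch of the voting idea described above, with toy values and nothing mission-specific: run the same computation on three independent CPUs and take a bitwise 2-of-3 majority, so a single radiation-induced bit flip in one copy is masked.

```python
def majority_vote(a: int, b: int, c: int) -> int:
    """Bitwise 2-of-3 majority: each output bit is set if it is set in
    at least two of the three inputs."""
    return (a & b) | (a & c) | (b & c)

# Pretend all three CPUs computed 0b10110010, but a cosmic ray
# flipped one bit in the second copy.
cpu_a = 0b10110010
cpu_b = 0b10110010 ^ 0b00000100  # single-event upset: bit 2 flipped
cpu_c = 0b10110010

assert majority_vote(cpu_a, cpu_b, cpu_c) == 0b10110010
```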
> $200,000 back 23 years ago (1998) is around $2+ million in today's dollars!

But according to the article, the $200,000 price tag is for the processor used in the rover, so the $200,000 is in today's dollars, right?
> SpaceX's Starlink internet satellite system has the luxury of operating in low earth orbit, where threats from solar/cosmic radiation are relatively low. Also, their satellites and system rely on redundancy and are relatively inexpensive, as is their cost to launch, which means that their expected lifetime does not have to be as long as other space-based systems'.

Uh, SpaceX boosters have an operational history measured in years - they're in use for far longer than the few minutes that boosters from old space operate.
> Uh, SpaceX boosters have an operational history measured in years - they're in use for far longer than the few minutes that boosters from old space operate.

Uh... your post in response to mine makes no sense. I wasn't commenting on what you refer to as “Old Space.” Rather, it was about why Starlink satellite tech does not need radiation-hardened semiconductors. Any space asset operating beyond LEO does, unless shielding is employed, which drives other costs.
SpaceX's Crew Dragon can operate in space for 210 days, 10x longer than the Space Shuttle could.
They're handling the Starship vehicle, which should travel to the Moon, Mars, and beyond, the same way as their past vehicles. There's no excuse for the absurdly high prices for decades-old technology that we see from Old Space.
> Why such an old processor?

A major part of the reason that NASA projects, in particular, use such comparatively old processors is the development time for an interplanetary mission. It can take 10 to 20 years to get some missions off the ground (literally), and changing the processor to a new one requires going through recertification, additional rounds of testing and software revisions, and delays. So they just stick with the same processor from the start of development work to the actual launch. With manned vehicles, it’s even more trouble to change computing elements, since the same vehicle class (or, in the case of the Space Shuttle, the same vehicles themselves) may be in use for 10 to 30 years and would require recertification (for human flight, which takes even longer and is more rigorous) and additional testing.
> $200,000 back 23 years ago (1998) is around $2+ million in today's dollars!

No it isn't. That would require 11% annual inflation for 23 years. $200k from then would be $320k in today's dollars if we use CPI.
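A quick back-of-the-envelope check of those numbers; the CPI levels below are approximate annual averages, used only for illustration:

```python
# Approximate US CPI-U levels (illustrative assumptions, not official figures)
cpi_1998, cpi_2021 = 163.0, 271.0
print(200_000 * cpi_2021 / cpi_1998)          # ~332,000 -> on the order of $320k today

# Annual inflation rate that *would* be needed to turn $200k into $2M over 23 years
print((2_000_000 / 200_000) ** (1 / 23) - 1)  # ~0.105, i.e. roughly 10-11% per year
```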
> Uh... your post in response to mine makes no sense. I wasn't commenting on what you refer to as “Old Space.” Rather, it was about why Starlink satellite tech does not need radiation-hardened semiconductors. Any space asset operating beyond LEO does, unless shielding is employed, which drives other costs.

You're the one who veered the conversation towards Starlink, though. I was talking about SpaceX's entire portfolio of hardware, most of which isn't limited to operating in LEO, and most of which is in use for longer than most of the devices from Old Space (aside from the odd interplanetary mission that gets launched every few years).
Specialized "radiation hardened" hardware is unnecessary. SpaceX accomplishes radiation hardening of ordinary commercial hardware through software - have the same program running on multiple cheap computers, and look for consensus. When one occasionally returns a different answer, consider it damaged and stop listening to it. Hardware failures aren't unique to space - this same practice is done all the time in datacenters on earth, and the practice scales up well and for much cheaper (and with much more up to date hardware) than using 23 year old CPUs at prices many thousands of times higher than they should be.
> Why such an old processor?

Proven instruction set, compatibility with the historical code base, accumulated tribal expertise. RTOS. Lots of reasons.
> Specialized "radiation hardened" hardware is unnecessary. SpaceX accomplishes radiation hardening of ordinary commercial hardware through software - have the same program running on multiple cheap computers, and look for consensus. When one occasionally returns a different answer, consider it damaged and stop listening to it. Hardware failures aren't unique to space - this same practice is done all the time in data centers on earth, and the practice scales up well and for much cheaper (and with much more up to date hardware) than using 23 year old CPUs at prices many thousands of times higher than they should be.

As many have pointed out, there are huge differences in the need for radiation hardening in longer space missions relative to shorter LEO missions. Radiation-hardened CPUs can withstand many orders of magnitude more radiation than normal components can. For missions in LEO or lasting days or months it may not be that important. But missions to Mars and beyond take years and may last for over a decade. Missions outside LEO, or, even more extreme, close to the Sun or Jupiter, are exposed over time to very high radiation levels.
> The PowerPC 750 processor was ahead of the game for its time, featuring a single-core, 233MHz processor, 6 million transistors (compared to today's 16 billion in a single chip), and based on 32-bit architecture.

The original Macs used Motorola 68xxx chips. It wasn't until 1994 that the transition to the PowerPC family began.
> Specialized "radiation hardened" hardware is unnecessary. SpaceX accomplishes radiation hardening of ordinary commercial hardware through software - have the same program running on multiple cheap computers, and look for consensus. When one occasionally returns a different answer, consider it damaged and stop listening to it.

It's clear you do not understand radiation hardening, or only understand it superficially. What you are describing only works for low-criticality systems and for those in low-dose environments.
If you are going to fly in GEO or to Jupiter, you cannot solve the problem with redundancy. All your redundant COTS systems will get dosed equally and will all die around the same time, likely before your mission accomplishes anything useful.
> What I describe will hold up totally fine for all Single Event Effects - individual bits getting flipped for individual calculations.

It's totally application-dependent. A cubesat intended to survive a couple of years in LEO is *probably* OK with random off-the-shelf junk, because LEO is very, very benign. For some applications that's totally fine.
It won't deal as well with circuits getting damaged. I'm not sure how much this is a problem - how often does this actually happen, and how many are actually consequential? I suspect there's a lot of damage that could happen that would never matter for most operations.
> “...and comes with an added $200,000 price tag.”

The only connection this chip has with Apple is that they used to use it in the iMacs. The chip is produced by IBM and Motorola and was supplied to Apple and several other companies. As I understand it, the temperature and radiation shielding, which ups the cost significantly, is done by NASA or their contracted third party, which may well be IBM.
Behind closed doors at Apple HQ they are currently working on pricing for the new M1x line.
They stop and wonder for a second if the people are ready for a 6 figure Mac.
Thanks NASA.
> As I understand it, the temperature and radiation shielding, which ups the cost significantly, is done by NASA or their contracted third party, which may well be IBM.

It's MUCH more than that. The BAE RAD750 shares the core architecture of the Apple part but is redesigned and fabricated on an entirely different process line. NASA is just one small customer, as these are also used by DoD and commercial customers.
> Maybe there is a valid reason for using old chip sets that maybe could have been alluded to by the author? Barring that, it just makes it sound like the NASA engineers are a bunch of old guys whose last innovation was what, 20 years ago? Who says you can't teach old engineers new tricks?

As SpaceX has shown, shielding for newer chips is not an issue. But think about it: there is no need to run anything graphically or computationally intensive on the rover. It's basically an optimized system to control onboard technologies/navigation systems and beam collected data back to earth. Most of the analysis will be done on earth with vastly more powerful computers. The G3 chip is also very thoroughly tested at this point, after being in use on many aerospace systems. The choice is between [wasted potential + potential new bugs] and [enough capability + little to no bugs].
So come on, what is the reason? Anybody? Is it radiation or something?
> The only connection this chip has with Apple is that they used to use it in the iMacs. The chip is produced by IBM and Motorola and was supplied to Apple and several other companies. As I understand it, the temperature and radiation shielding, which ups the cost significantly, is done by NASA or their contracted third party, which may well be IBM.

I don’t think it’s immediately clear how large or small a role Apple played in the AIM Alliance, from a hardware perspective. As the primary customer and largest user of the chips that the alliance made, Apple certainly had a significant role in determining the processor roadmap and feature set, even if they themselves weren’t designing the chips. And it seems that Apple had significant developmental input on the PowerPC chips through the first few generations (601, 603, and 604 for sure, likely up to the 750*). The reason the Intel switch occurred was that IBM wasn’t able to get the performance Apple demanded from the pro and prosumer chips on the high end and wasn’t able to maintain the thermal performance Apple wanted on the low end (sounds a lot like the reasons for Apple’s transition from x86 to ARM, actually).
> As SpaceX has shown, shielding for newer chips is not an issue.

I don't know how many times I can say it in this discussion: This. Is. Not. Accurate.
> I don't know how many times I can say it in this discussion: This. Is. Not. Accurate.

Exactly, it makes no sense comparing a Starlink satellite with Perseverance.