It is like Apple and their M2 roadmap: they have taken this route because Intel Xeon CPUs have gone far beyond what Apple was prepared to use in their Mac Pros. Intel has CPUs that make the Mac Pro look stupid; compare this to IBM, who were stuck in a rut with their processor speeds and obsessed with limiting the number of cores on the die. I am sure Apple could have used the Cell CPU and put in a decent GPU to boost performance. I wonder what the sales figures were over the lifetime of the PowerPC compared to the Intel Xeon Mac Pros?
Apple would at least have had a fighting chance to make decent multi-core PowerBooks and iMacs as early as 2005 had they used Cells/Xenons. Time-wise it could have worked, as the Xbox 360 was released a few months after Apple's Intel transition announcement, and the PS3 was released the same year as the new Intel Macs, albeit a few months later. Bottom line, Apple would most likely have known that Xenons/Cells were in the works, and even roughly in production, around the time of WWDC 2005 and likely beforehand. And yes, any Mac that used Cells/Xenons would also have been able to run the ppc64 kernel (Panther had basic G5 support and full 64-bit support was introduced in Tiger ppc) as they are very close to the 970, incidentally way before Intel Macs could run the x86-64 kernel (introduced in 10.5 for Intel). They could also have used e.g. VMX128 with minimal OS rewrites or dedicated apps.
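For what it's worth, here is a minimal sketch of what plain VMX/AltiVec code looks like from C; the same intrinsics work on the G4, the G5, the Cell PPE and (with more registers) the Xenon's VMX128. This is just my own illustration, not anything taken from Apple's or IBM's code:

/* Minimal VMX/AltiVec sketch: multiply two float arrays four elements at a
   time with a single fused multiply-add per group of four.
   Build with: gcc -maltivec vmxdemo.c (older Apple gcc wants -faltivec, and
   may prefer the (vector float)(0,0,0,0) literal syntax instead of braces). */
#include <altivec.h>
#include <stdio.h>

int main(void)
{
    float a[4] __attribute__((aligned(16))) = {1.0f, 2.0f, 3.0f, 4.0f};
    float b[4] __attribute__((aligned(16))) = {10.0f, 20.0f, 30.0f, 40.0f};
    float c[4] __attribute__((aligned(16)));

    vector float va = vec_ld(0, a);            /* load 16 aligned bytes          */
    vector float vb = vec_ld(0, b);
    vector float vzero = {0.0f, 0.0f, 0.0f, 0.0f};
    vector float vc = vec_madd(va, vb, vzero); /* c = a*b + 0 in one instruction */
    vec_st(vc, 0, c);                          /* store the result               */

    printf("%.1f %.1f %.1f %.1f\n", c[0], c[1], c[2], c[3]);
    return 0;
}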

That said, the Intel transition did make a lot of things easier, notably for porting (gaming in particular, and also emulation thanks to Cider/Wineskin). Yes, Rosetta had amazing performance, and I am not sure that x86 emulation from a RISC platform is as efficient (CISC-based Rosetta was combining RISC instructions into CISC, which would have resulted in a speed boost, as opposed to e.g. PowerVM-LX86 which had to do the opposite; never mind what IBM claims about its performance, which does seem high, but on POWER6-7, which both have hypervisor support, missing in the G5). The benchmarks that I could get out of it on the G5 (PowerVM-LX86 under openSUSE Tumbleweed) are less than stellar, but it could be due to an incomplete environment, see


That said, there are reports of a PB G4 running Quake 2 x86 at near-native speed under either QuickTransit or a ppc32 version of PowerVM-LX86 (all related, as is Rosetta), but it could very well be that this version made use of the MSR bit to switch endianness (unsupported and absent on the G5 altogether, which is why VPC 7 had to be written specifically for G5s without it, while VPC 6 used it on G4s).

Cheers,
 

The Intel Xeon roadmap looks interesting but Apple have chosen not to follow it.
Which methinks is a big mistake, as i9s are on par with, if not better than, the M2 Ultra.


Mac Pros don't tick the "power-efficient" checkbox, which is the only thing M2 Ultras are better at, so M2 Ultras aren't really needed there. Let alone departing from x86 for developers when the whole computing world outside of Apple is still using x86 for desktop apps; Windows on ARM and ARM desktops/laptops are still a long way off from x86 sales. Apple not having launched it over the Xmas break is puzzling; are they having second thoughts? Bear in mind that Xcode 13 beta 1 had references to Ice Lake SP Xeons:


So maybe they are developing both? It would make sense: one version for desktop app developers and another one for macOS/iOS/Android? (The days of the Mac mini i5/i7 will be numbered the second the next MP is announced in any case, whatever architecture it's based on.)

A new Intel MP (with or without an M2 Ultra MP) would be great news for Intel users, as Intel Macs would have several more years of support ahead of them (particularly the Intel mini still on sale), but this would be uncharted territory for Apple, as they haven't exactly done that in the past, to say the least, including the bittersweet PPC to Intel transition and only ONE extra OS release for PPC (Leopard). They didn't even bother marketing the PPC version of SL, even though we now know from 10A190 that there clearly was one in the works (there are references to later internal PPC builds); some people from the G5 crowd, including myself, use it on an almost daily basis btw. In any case way worse than 68k -> PPC, for which we were treated to 3 extra releases (7.5.1, 7.6.1 and 8.1). Roughly 3 years of support past the start of the transition in both cases; we are getting close to that mark BUT with Intel Macs still being sold! Will we finally get payback from Apple for the past two transitions? Wishful thinking 🤣
 
Apple would at least have had a fighting chance to make decent multi-core PowerBooks and iMacs as early as 2005 had they used Cells/Xenons.
The Cell is notorious for being very challenging to program for.
 
The Cell is notorious for being very challenging to program for.
Ok, I didn't know; it was however widely used in IBM servers (e.g. the QS20-22) and the Roadrunner supercomputer was Cell-based. I would have thought it would be very close to the 970 and the rest of the POWER/PowerPC family that it belongs to? As far as I can tell most ppc64 software runs on it?
 
Ok, I didn't know; it was however widely used in IBM servers (e.g. the QS20-22) and the Roadrunner supercomputer was Cell-based. I would have thought it would be very close to the 970 and the rest of the POWER/PowerPC family that it belongs to? As far as I can tell most ppc64 software runs on it?
Cell has a really weird processing model because it only has 1 general-purpose core and then 8 SPEs (synergistic processing elements) that are mostly just vector processors. IIRC the main issue is that the vector processors don't have proper cache or memory coherency with the main core, and thus you have to manually manage task assignment and loading. Additionally, while they were very fast at vector processing, they aren't general-purpose units, so the workload needs to be able to take advantage of these vector processors to see a real speedup from Cell.
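To make the "manually manage task assignment and loading" point concrete, here is a rough, self-contained sketch of the offload pattern a Cell programmer ends up writing. The spe_* helpers below are stand-ins I made up so the sketch compiles and runs; they are not the real libspe2 API, the point is just the bookkeeping the PPE side has to do:

/* Illustration of the Cell offload pattern: the PPE slices the work, copies
   each slice into an SPE's small local store, runs the kernel, and copies the
   results back.  Everything "spe_" here is a made-up stub, NOT libspe2. */
#include <stdio.h>
#include <string.h>

#define N_SPES    8
#define LS_BYTES  (256 * 1024)                    /* 256 KB local store per SPE        */
#define CHUNK     (LS_BYTES / sizeof(float) / 4)  /* leave room for SPE code and stack */

static float fake_local_store[N_SPES][CHUNK];     /* stand-in for the SPE local stores */

static void spe_dma_in(int spe, const float *src, size_t n)
{
    memcpy(fake_local_store[spe], src, n * sizeof(float));   /* host memory -> local store */
}

static void spe_kernel(int spe, size_t n)
{
    for (size_t i = 0; i < n; i++)                /* stand-in for the real SPE program */
        fake_local_store[spe][i] *= 2.0f;
}

static void spe_dma_out(int spe, float *dst, size_t n)
{
    memcpy(dst, fake_local_store[spe], n * sizeof(float));   /* local store -> host memory */
}

int main(void)
{
    enum { N = 100000 };
    static float data[N];
    for (size_t i = 0; i < N; i++)
        data[i] = (float)i;

    size_t done = 0;
    while (done < N) {                            /* manual task assignment by the PPE */
        for (int spe = 0; spe < N_SPES && done < N; spe++) {
            size_t chunk = (N - done < CHUNK) ? N - done : CHUNK;
            spe_dma_in(spe, data + done, chunk);
            spe_kernel(spe, chunk);               /* real SPEs run this asynchronously */
            spe_dma_out(spe, data + done, chunk);
            done += chunk;
        }
    }
    printf("data[12345] = %.1f\n", data[12345]);  /* expect 24690.0 */
    return 0;
}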
 
Cell has a really weird processing model because it only has 1 general-purpose core and then 8 SPEs (synergistic processing elements) that are mostly just vector processors. IIRC the main issue is that the vector processors don't have proper cache or memory coherency with the main core, and thus you have to manually manage task assignment and loading. Additionally, while they were very fast at vector processing, they aren't general-purpose units, so the workload needs to be able to take advantage of these vector processors to see a real speedup from Cell.
True, I had forgotten about that, sorry, but can't the PPE function as a generic ppc64 unit, or is it too slow without the SPEs to be of any use? What about the Xenon, which is more or less a tri-core Cell with VMX128?
 
Cell has a really weird processing model because it only has 1 general-purpose core and then 8 SPEs (synergistic processing elements) that are mostly just vector processors. IIRC the main issue is that the vector processors don't have proper cache or memory coherency with the main core, and thus you have to manually manage task assignment and loading. Additionally, while they were very fast at vector processing, they aren't general-purpose units, so the workload needs to be able to take advantage of these vector processors to see a real speedup from Cell.
That's what I remember: it had incredibly high theoretical SIMD performance, but getting real-world performance out of it took a lot of effort.
 
True, I had forgotten about that, sorry, but can't the PPE function as a generic ppc64 unit, or is it too slow without the SPEs to be of any use? What about the Xenon, which is more or less a tri-core Cell with VMX128?
The Xenon was also notoriously difficult to program for, which is why the Xbox One went back to x86.
 
The Xenon was also notoriously difficult to program for, which is why the Xbox One went back to x86.
It did however perform very well, notably for dynamic instruction translation from x86 to ppc64 for original Xbox titles via QuickTransit (a.k.a. PowerVM-LX86/Rosetta), at near-native speed.

I thought it consisted of 3 modified Cell PPEs plus VMX128 and didn't have the Cell SPEs, in which case it would have been less of a hassle to program for?

Either way, I guess what also contributed to MS going back to x86 after the 360 was the 3 rings of death (sums up the technical issues with the PowerPC architecture at the time, though the entire POWER family has since got a lot better in terms of reliability, so it is a shame that it was left behind), Apple having abandoned PowerPC essentially triggering its downfall as far as the consumer electronics market was concerned, and ppc64 big-endian beginning to be phased out (in favour of ppc64 little-endian), as porting from little to big endian often requires endianness switching, notably for graphics. That's why we still don't have e.g. Half-Life for ppc to this day (too many big-endian bugs trying to port Xash3D), and why the latest versions of Firefox aren't available for e.g. Debian or openSUSE ppc64: their source needs to be patched every time for the infamous ARGB bug resulting from the big <-> little endian colour order (BGRA).
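To illustrate the class of bug I mean (this is just a toy example I wrote, not the actual Firefox or Xash3D code), here is what happens to a packed 32-bit ARGB pixel on the two architectures:

/* The same 32-bit ARGB value lands in memory in opposite byte orders on a
   big-endian G5 and on little-endian x86/ppc64le, so code that indexes the
   bytes directly gets its red and blue channels swapped on one of the two. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t argb = 0x11223344;          /* A=0x11 R=0x22 G=0x33 B=0x44 */
    const uint8_t *p = (const uint8_t *)&argb;

    /* On a big-endian G5 this prints 11 22 33 44 (A R G B);
       on little-endian hosts it prints 44 33 22 11 (B G R A). */
    printf("bytes in memory: %02x %02x %02x %02x\n", p[0], p[1], p[2], p[3]);

    /* the usual fix: byte-swap when the file/wire format disagrees with the host */
    uint32_t swapped = ((argb & 0x000000FFu) << 24) |
                       ((argb & 0x0000FF00u) << 8)  |
                       ((argb & 0x00FF0000u) >> 8)  |
                       ((argb & 0xFF000000u) >> 24);
    printf("byte-swapped value: 0x%08x\n", (unsigned)swapped);
    return 0;
}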
 
True, I had forgotten about that, sorry, but can't the PPE function as a generic ppc64 unit, or is it too slow without the SPEs to be of any use? What about the Xenon, which is more or less a tri-core Cell with VMX128?
The PPE is a very slow CPU. It's an "in-order" architecture, very simple, with only one good thing: a high frequency for the time.

Actually, the PPC970 was a lot faster than the PPE. And it was a problem for Microsoft: the first Xbox 360 SDK was a Power Mac G5 with two PPC970s (2 GHz), and the second SDK used the PPE (three cores) and was a lot slower.
 
The PPE is a very slow CPU. It's an "in-order" architecture, very simple, with only one good thing: a high frequency for the time.

Actually, the PPC970 was a lot faster than the PPE. And it was a problem for Microsoft: the first Xbox 360 SDK was a Power Mac G5 with two PPC970s (2 GHz), and the second SDK used the PPE (three cores) and was a lot slower.
But the PPE would have been the one doing Xbox 1 emulation via QuickTransit, which was very efficient; I haven't yet managed to get x86 dynamic translation to that level of speed on the G5 via PowerVM-LX86, which is QuickTransit.
 
Emulation instead of showing the true performance of the processor: that is Apple's idea. There are folk out there dedicated to spending all their hard-earned cash on the latest Mac because the Joneses have just ordered one. They do not really need the very latest, because their existing Mac never got pushed to 100% as they only ever surfed the web and sent and received emails.

Some folk do not trust the 1st generation of a new product due to performance and product design flaws, and would sooner wait until it has been tried and tested by others. The software that they rely on has not been written in time for the release of the new product.

Some folk are made to wait until the software has been developed.

Some wait until the price comes down and buy a decent second-hand one, saving them a small fortune.

Some of us purchased a Mac all those years ago and find it does the job we purchased it for even today. What we are after with our Macs is a way to increase performance without damaging anything. We want the same as the Amiga folk and the Atari Falcon users have got: a dedicated group of folk who can create or modify the tech that is out there to work for us, to give us the latest things like a Thunderbolt port, SATA 3 compatibility, etc.
 
Emulation instead of showing the true performance of the processor: that is Apple's idea.
That's now indeed the third time that we fall for it, and often the 1st gens aren't nearly as good as the previous architecture, as was blatantly demonstrated by the 6100's dismal performance vs e.g. the Quadra 840AV, and by the very first Mac Pro (entry model) being outperformed by the Quad; however I am not so sure about M1s vs last-gen Intel Macs. In any case, for the last three transitions this wasn't even Apple's idea, as the 68k emulation layer was provided by Motorola and IBM (granted, the 601 was a joint effort of IBM, Motorola and Apple); as for Rosetta, which has been used for the last two transitions, that's IBM (originally Transitive with QuickTransit).

Speaking of which, what would be really great would be to get x86 dynamic translation to work efficiently the other way round - essentially the reverse of Rosetta - but so far performance is low, and only under Linux, that is.
They do not really need the very latest, because their existing Mac never got pushed to 100% as they only ever surfed the web and sent and received emails.

Apple regularly makes sure of that by dooming rather recent models to older OSs (and thus limiting functionality and add-on potential, as supporting kexts for the latest add-ons are only made for the latest OSs).
Some folk do not trust the 1st generation of a new product due to performance and product design flaws, and would sooner wait until it has been tried and tested by others. The software that they rely on has not been written in time for the release of the new product.

Some folk are made to wait until the software has been developed.

Some wait until the price comes down and buy a decent second-hand one, saving them a small fortune.

Precisely, this has happened after every transition: we had to wait until System 7.5.1 to have a native ppc operating system; before that PowerMacs were running 7.1 in emulation mode. And following the OS 9 to OS X, Intel and Apple Silicon transitions, emulation was used for existing apps (Classic and Rosetta), even though developers often scrambled to make updates, e.g. Carbonised apps, fat or universal binaries.

Some of us purchased a Mac all those years ago and find it does the job we purchased it for even today. What we are after with our Macs is a way to increase performance without damaging anything. We want the same as the Amiga folk and the Atari Falcon users have got: a dedicated group of folk who can create or modify the tech that is out there to work for us, to give us the latest things like a Thunderbolt port, SATA 3 compatibility, etc.

Speaking of which, are any of the USB3, SATA and Thunderbolt kexts AOSP/available in Darwin?
 
That's now indeed the third time that we fall for it, and often the 1st gens aren't nearly as good as the previous architecture, as was blatantly demonstrated by the 6100's dismal performance vs e.g. the Quadra 840AV, and by the very first Mac Pro (entry model) being outperformed by the Quad; however I am not so sure about M1s vs last-gen Intel Macs. In any case, for the last three transitions this wasn't even Apple's idea, as the 68k emulation layer was provided by Motorola and IBM (granted, the 601 was a joint effort of IBM, Motorola and Apple); as for Rosetta, which has been used for the last two transitions, that's IBM (originally Transitive with QuickTransit).
I think that folk are blinded by the name that Steve Jobs and crew set up, and it became like Ferrari: a product only a few could afford and only a few would use, because it was sold as a publishing computer. This was really because of the software houses who were prepared to develop software for the Mac.

Emulation that is not benchmarked. Why has there never been any standard for benchmarking a Mac?
Is this because Apple control everything and their emulation trick would fail drastically?
Folk seem to like to brag about emulation; if the idea is to emulate a CPU to run software, then why waste money on an M2 Mac when someone could write a half-decent emulator to run the non-existent software that has been developed? Sorry, unreleased M2-compiled software.

Mac OS X Big Sur was presented as version 11 in 2020, and macOS Monterey was presented as version 12 in 2021; this was a real screw-up. Apple can no longer call it OS X, surely, because we had OS 9 then X meaning 10, so eleven would be XI, OS XI a.k.a. version 11, and so on.

I am sure that Xeons could emulate the M2 without any loss in speed. Until it is done we will never know.
Especially this one, which magazines recently claim is outdated.

2.5GHz 28-core Intel Xeon W processor, Turbo Boost up to 4.4GHz: can the M2 Extreme match this Intel Xeon? I very much doubt it!

How do you run MS Windows natively on an M2 as well as OSX? A lot of folk want to do this!

Changing my thoughts slightly, I will rant on about Apple's development of software, in particular Apple Aperture: this is a fantastic application that even made it to version 3 for the Intel Macs. Apple came up with a substitute that was supposed to be 100% better in most ways, and in reality it is 100% gimmicky, because someone at Apple wanted it to be part of OSX.

I really wonder if the truth was that Adobe paid Apple off to take the software off the market, or whether Apple was using code that was already copyrighted by Adobe and the easiest thing to do was to pull it. The excuse about it not being updated enough seemed silly. I wonder if the sales forecast for Apple Aperture was set far too high and the software never met its sales targets. You just don't kill two applications, Aperture and iPhoto, for the crap that is embedded in OSX and matches neither of the two discontinued applications.

I run Aperture on both my Quad G5 and my Mac Pro 1,1 running Lion.
This returns me to my point: I use the old stuff, as we know its problems and can look for the fixes that have been created. I can use my PC with Windows 10 for anything else that I need.
 
Either way, I guess what also contributed to MS going back to x86 after the 360 was the 3 rings of death (sums up the technical issues with the PowerPC architecture at the time. . .
I think the decision to return to x86 was more a common-sense decision to get both their console and the game devs on the same architecture and the same dev tools. It's much easier to port a version of a game from the current-gen Xbox/PS5 to PC because they're all running x86 and DX12. Sony did the same thing from PS3 to PS4, for the same reasons. There's no sense fracturing your dev base when it's a certainty that AAA games will need to be released for both platforms.
 
I think the decision to return to x86 was more a common-sense decision to get both their console and the game devs on the same architecture and the same dev tools. It's much easier to port a version of a game from the current-gen Xbox/PS5 to PC because they're all running x86 and DX12. Sony did the same thing from PS3 to PS4, for the same reasons. There's no sense fracturing your dev base when it's a certainty that AAA games will need to be released for both platforms.
Then, since MS now has Windows and DirectX 12 on ARM, it could well be that MS has a clear shot for the next Xbox if they want to switch over to ARM, as Nintendo did directly from PowerPC to ARM.

But then again, if Apple hadn't switched over to Intel, all bets are off as to whether MS and Sony would have stuck with the PPC arch had PPC dev kits still existed (notably for MS, with the G5 and any potential follow-up POWER5-7-based PowerMac models).
 
Then, since MS now has Windows and DirectX 12 on ARM, it could well be that MS has a clear shot for the next Xbox if they want to switch over to ARM, as Nintendo did directly from PowerPC to ARM.

But then again, if Apple hadn't switched over to Intel, all bets are off as to whether MS and Sony would have stuck with the PPC arch had PPC dev kits still existed (notably for MS, with the G5 and any potential follow-up POWER5-7-based PowerMac models).
PPC was a dead platform for mainstream needs. IBM was focused on POWER server chips (still is) and had no interest in developing desktop or laptop chips. And Motorola was headed towards the embedded market, where PPC's low heat/low power was, and is, perfect. Apple needed lots of laptop chips which could compete with Intel's best, and the only place to get them was Intel.

Just as evolution doesn't care about perfect, but just good enough, so the desktop space doesn't go to the perfect ISA, but the one that's good enough for the wide variety of needs. In 2005 that was definitely Intel, and now, obviously, Apple thinks it's ARM. Given that Intel and AMD's new flagship chips are hitting 100C in stress tests trying to keep up with each other, they may be right.
 
Design Revision Level 970MP
DD1.0 - 0x00440100
DD1.1 - 0x00440101

Two versions of the 970MP; I wonder if the first is the prototype version. I will keep reading. How do we get OSX to tell us the processor revision?
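The DD level sits in the low half of the PVR (Processor Version Register), which matches the two values above (0x0100 = DD1.0, 0x0101 = DD1.1). If I remember right, under OS X the same PVR value shows up as the cpu-version property of the cpu node in the IODeviceTree plane (ioreg -l), so worth double-checking; on a G5 running Linux (e.g. the openSUSE install mentioned earlier) the kernel prints it in /proc/cpuinfo, which this quick little C snippet of mine just greps out:

/* Print the CPU model and revision lines from /proc/cpuinfo on a G5 running
   Linux; the revision line carries the PVR, whose low 16 bits are the DD level. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("/proc/cpuinfo", "r");
    if (!f) { perror("/proc/cpuinfo"); return 1; }

    char line[256];
    while (fgets(line, sizeof line, f)) {
        /* the ppc kernel emits lines like:
             cpu      : PPC970MP, altivec supported
             revision : 1.1 (pvr 0044 0101)          */
        if (!strncmp(line, "cpu", 3) || !strncmp(line, "revision", 8))
            fputs(line, stdout);
    }
    fclose(f);
    return 0;
}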
 
Well, simple: if you read the Intel guide and then read the data sheet on the IBM 970MP, you discover things are done differently on the 970MP, like JTAG overriding everything, and according to the data you get the various speeds by division.

Does Apple use the JTAG as well as the PLL to lock the CPU speed?

JTAG Interface

The physical JTAG interface, or test access port (TAP) consists of four mandatory signals and one optional asynchronous reset signal. Table 1 below summarizes the JTAG TAP signals.

TCK (Test Clock): synchronizes the internal state machine operations.
TMS (Test Mode Select): sampled at the rising edge of TCK to determine the next state.
TDI (Test Data In): the data shifted into the device's test or programming logic; sampled at the rising edge of TCK when the internal state machine is in the correct state.
TDO (Test Data Out): the data shifted out of the device's test or programming logic; valid on the falling edge of TCK when the internal state machine is in the correct state.
TRST (Test Reset): an optional pin which, when available, can reset the TAP controller's state machine.

Ref https://www.corelis.com/education/tutorials/jtag-tutorial/jtag-technical-primer/
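For anyone wondering how those four signals actually get used, here is a bare-bones sketch of bit-banging the TAP to shift a 32-bit IDCODE out of a device. The jtag_set()/jtag_get() functions are placeholders I invented for whatever ends up driving the pins (a probe, a GPIO header on some adapter, ...); the TMS walk through the state machine and the shifting are the standard part:

/* Bare-bones TAP bit-bang sketch: walk the JTAG state machine and shift a
   32-bit IDCODE out of the data register. */
#include <stdint.h>
#include <stdio.h>

enum { TCK, TMS, TDI, TDO };

/* hypothetical pin access: replace with real probe/GPIO code */
static void jtag_set(int pin, int level) { (void)pin; (void)level; }
static int  jtag_get(int pin)            { (void)pin; return 0; }

/* read TDO (valid since the previous falling edge), then pulse TCK once;
   TMS and TDI are sampled by the device on the rising edge */
static int clock_bit(int tms, int tdi)
{
    int tdo = jtag_get(TDO);
    jtag_set(TMS, tms);
    jtag_set(TDI, tdi);
    jtag_set(TCK, 1);
    jtag_set(TCK, 0);
    return tdo;
}

static uint32_t read_idcode(void)
{
    uint32_t id = 0;
    int i;

    for (i = 0; i < 5; i++) clock_bit(1, 0); /* force Test-Logic-Reset (IDCODE is the default DR) */
    clock_bit(0, 0);                         /* -> Run-Test/Idle  */
    clock_bit(1, 0);                         /* -> Select-DR-Scan */
    clock_bit(0, 0);                         /* -> Capture-DR     */
    clock_bit(0, 0);                         /* -> Shift-DR: IDCODE captured, bit 0 now on TDO */

    for (i = 0; i < 32; i++)                 /* shift out LSB first; TMS=1 on the last bit -> Exit1-DR */
        id |= (uint32_t)clock_bit(i == 31, 0) << i;

    clock_bit(1, 0);                         /* -> Update-DR      */
    clock_bit(0, 0);                         /* -> Run-Test/Idle  */
    return id;
}

int main(void)
{
    printf("IDCODE: 0x%08x\n", (unsigned)read_idcode()); /* all zeros with the stub pins above */
    return 0;
}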

JTAG enables an adversary to gain low-level control over the chip it is debugging.

For instance, an attacker could halt the processor, change the program counter, bypass watchdogs and have read/write access to firmware.

Does Apple, in the G5s, access JTAG from the firmware at boot to check the speeds of the bus and the CPU in order to lock them in place and stop them from being modified?

This surely indicates that a software patch could be written to control JTAG, access the firmware and change the CPU speed.

More info here; https://www.corelis.com/education/tutorials/jtag-tutorial/jtag-technical-primer/
 

ref https://www.xjtag.com/about-jtag/what-is-jtag/

Is it a lot of work to create a JTAG test system?

Using the libraries for standard non-JTAG components provided by XJTAG, you can get a set of tests up and running for your board with no code development. The library files contain models for all types of non-JTAG devices from simple resistors and buffers to complex memory devices such as DDR3. Because boundary scan disconnects the control of the pins on JTAG devices from their functionality the same model can be used irrespective of the JTAG device controlling a peripheral.

Most boards already contain JTAG headers for programming or debug so there are no extra design requirements.

Three simple letters – BGA

An increasing number of devices are supplied in BGA (Ball Grid Array) packaging. Each BGA device on a board imposes severe restrictions on the testing that can be done using traditional bed-of-nails or flying probe machines.

Using a simple four-pin interface, JTAG / boundary scan allows the signals on enabled devices to be controlled and monitored without any direct physical access.

Recover ‘dead’ boards where functional test would not work

JTAG / boundary scan tests can be run on any board with a working JTAG interface. Traditional functional tests cannot be run if the board does not boot; simple faults on key peripherals, such as RAM or clocks, would be found using JTAG but would prevent functional tests from providing any diagnostic information.

Our lovely G5 CPU is BGA, so this will apply as per the IBM data sheet.

So JTAG could be useful for this overclocking project, as it is able to give us data that we cannot normally get hold of. What we have to figure out now is how to use JTAG to access this data; then we can change the values to see if we can either underclock the CPU or even overclock it.

A quick Google and here we have a piece of software that can test JTAG and show us what is going on, but the question is how easy it is to find this on our CPU. The problem with the software is that it would work with the Intel Macs but not our PowerPCs.

 