I know a lot of folk have compared the PPC970 against the Pentium 4, but what a comparison: different code, and the P4 was a really crap CPU. It was slow once all of Microsoft's patches and updates to Windows were applied.

There was simply no way that anyone could, or even would, compare the two processors. It is amazing that the comparison was ever mentioned on this forum.

The 970MP cannot be compared against the P4 or the Core Duo. Overload those CPUs with Windows updates and they crawl.
 
I know a lot of folk have compared the PPC970 against the Pentium 4, but what a comparison: different code, and the P4 was a really crap CPU.
It sure was, but it was what many PCs used when the G5 was current, so… it seemed appropriate to compare them.

The 970MP cannot be compared against the P4 or the Core Duo.
Neither can the P4 be compared to the Core Duo. The latter is a dual-core Pentium M.
 
  • Like
Reactions: B S Magnet
It sure was, but it was what many PCs used when the G5 was current, so… it seemed appropriate to compare them.


Neither can the P4 be compared to the Core Duo. The latter is a dual-core Pentium M.
The PowerPC 970MP, code-named "Antares", was the dual-core CPU from IBM back in July 2007.
The Core 2 processor line was introduced on July 27, 2006.


I think both the P4 and the Core Duo were quite poor performers. The problem usually cited with the PPC970 is the heat; cooling any of these processors was a real challenge.

I remember back in the days of the 486 CPU, I had a 133 MHz CPU with passive cooling, but I was not happy with this, so I installed a fan to cool the heatsink. I took the machine to the local computer guys to have tag RAM put in and they laughed at the heatsink with a fan on it. The funny thing is that this became the norm from the Pentium 75 MHz onwards.
 
The PowerPC 970MP, code-named "Antares", was the dual-core CPU from IBM back in July 2007.
The Core 2 processor line was introduced on July 27, 2006.
The PowerPC 970MP was announced in July 2005.
Power Mac G5s using the PowerPC 970MP were released in October 2005.

Apple stopped selling iMac G5s in March 2006, Power Mac G5s in August 2006 and Xserve G5s in November 2006. The CPUs the 970 family (installed in Macs, at least) competed with were the Pentium 4/Pentium D/Xeon and Athlon 64/64 X2/64 FX/Opteron. (Yes, if you had bought an Xserve G5 in October 2006 you could have compared it to a Core 2 Duo-derived Xeon rig... what would the result have been?)

I think both the P4 and the Core Duo were quite poor performers. [...]
The Core Duo was a dual-core Pentium M (i.e. a low-power mobile CPU) targeted at laptops and small form factor desktop systems. It was not targeted at high-performance desktop PCs or servers, even though there were Core Duo-derived Xeons. There's no point in comparing a Core Duo to a 970, which was meant to be a high-performance desktop/server CPU with a much higher heat output and power draw.

The Core 2 Duo is not based on the Core Duo (or Core Solo). It uses a different microarchitecture.
 
  • Like
Reactions: B S Magnet

The desktop Core 2 Duo uses Socket LGA 775, while the Core Duo microprocessors use the BGA479 package or Socket M.

Some laptops are better quality than desktop machines; we have to look at the components inside, like the RAM (its speed and quality), the quality and read/write speed of the HDD, and how the device dissipates heat.

I am glad you reminded me of the CPU types.

You said it: there is no point in comparing processors of different types. Intel's chips are not PowerPC CPUs and were designed as market fillers; Intel is an enormous company that is after profit. Apple is after profit too. IBM made a processor that is very capable but could not keep up with Apple's demands. Did Jobs ever want Apple to go in the direction that it is heading?

The early Intel Xeon Mac Pros could not match the PowerPC 970MP; I proved this years ago with a Photoshop test.

There will always be a bottleneck with any design in order to cut costs.
 
You said it: there is no point in comparing processors of different types. Intel's chips are not PowerPC CPUs and were designed as market fillers; Intel is an enormous company that is after profit. Apple is after profit too. IBM made a processor that is very capable but could not keep up with Apple's demands.

IBM produced what they did for their enterprise market. Apple kneecapped the POWER4 architecture with, as noted earlier in this discussion, an insufficient northbridge/U3 for all but the final run of G5s.

All of these corporations are “after profit”. This isn’t a profound take.


Did Jobs ever want Apple to go in the direction that it is heading?

We’ll never know.

Apple, for worse or better, are no longer beholden to an issue called “founder’s syndrome” — the risk of which ended the day Jobs handed over the keys to Cook.
 
  • Like
Reactions: Amethyst1
IBM produced what they did for their enterprise market. Apple kneecapped the POWER4 architecture with, as noted earlier in this discussion, an insufficient northbridge/U3 for all but the final run of G5s.

All of these corporations are “after profit”. This isn’t a profound take.




We’ll never know.

Apple, for worse or better, are no longer beholden to an issue called “founder’s syndrome” — the risk of which ended the day Jobs handed over the keys to Cook.
If we go back in time to the early days, the PC was using 8086 and 80286 CPUs, Atari had the ST, and the Amiga was born. I remember companies using the Atari because of its business and music software. The Amiga was used for video work, and the Apple had an eye-straining little screen; that little screen was loved very much, and software developers used it for art and graphic design because it was more capable than the competition. Then along came all the other Macs, or something like that.

My point is that no one really cared about the graphics card or the amount of RAM, or even the speed or make of the CPU!
 
I am very surprised that no one has started a thread on building a Power Mac G5 accelerator board. This could be done in several ways, for example by removing the BGA processor, fitting a BGA socket, and then attaching the accelerator to that.

My second idea would be to use a PCIe slot and power the card from the main PSU. There would probably be limits on the amount of data you could actually shift across, but I refer to the Atari Falcon 030, which this idea is borrowed from. If you are wondering what they did, they simply designed a mini computer that uses a 68060 CPU instead of the 68030 processor. The boards have extra features like faster RAM.

Here are the specs

Features

  • 68060 (with FPU & PMMU) at selectable clock from 80 to 100 MHz.
  • CT63 bus at 80 to 100 MHz: SDRAM & 060 bus accesses.
  • SDRAM PC133 (CL2) at 80 to 100 MHz with BURST accesses & page HIT on open banks.
  • 1 SDRAM PC-133 socket for 64 MB to 512 MB.
  • 1 MB of FLASH with BOOT, 060 INIT, TOS4 + patches. Can be easily updated!
  • Switch to toggle between the 060 and the Falcon's 030.
  • 100% Falcon 030 mode available. The 68882 is available in 030 mode.
  • Easy fitting WITHOUT SOLDERING in the original Falcon case; the Falcon power supply is replaced by an external standard ATX power supply (or the board goes into an ATX tower).
  • 060 Bus Slot for adding a daughter card (EtherNAT).


Hardware add-ons for CT60

  • CTCM - a programmable clock module
  • CTPCI - PCI extension allowing PCI cards to be connected. Radeon, USB & Ethernet are supported (soon).
  • EtherNAT - network adapter with USB
  • SuperVidel - graphics card with good Videl compatibility. A small Ethernet add-on is available.
There is no reason why something could not be developed for the PowerPC 970.
 
This is treading into the Dunning-Kruger pastures.
You may be correct; I am pretty sure someone watching this thread has the knowledge to build such a device. Research is required to decide on the various parts that would possibly work.


Which components would be used?
CPU (the datasheet will contain loads of information)
RAM (this is the easy part to spec)
Memory controller
I/O controller for the PCIe interface (there will be loads of information out there regarding the PCIe connector)
BIOS (this would contain the information about all of the hardware and the settings for speeds, etc.)

Designing the schematic would be next when you have chosen all the components.

Building a prototype and then writing a possible driver for OS X.

Adding extra things to the board once the basic model is up and running.
 
but I refer to the Atari Falcon 030, which this idea is borrowed from

The Atari, Amiga and other 68k-era machines use much, much simpler technology than G5s. They can't even be compared in such a scope; we are talking about roughly ten years of technological progress between them.

No commercial company ever managed to create any sort of G5 accelerator for either G4 or low-end G5 machines, because it was not financially reasonable. Many man-years of top-grade engineering are needed to design this kind of hardware.

Also, the reason G5s cannot be overclocked, and no one has managed to do it, is simple: G5 chips were binned by Apple/IBM, with the best silicon already pushed hard for the high-end machines and the worst silicon going into the low-end ones.
As far as I remember, IBM's datasheet specified the maximum clock for G5s as 2.0 GHz.
So if you overclock your PMG5 by even a small margin, it will become unstable or won't boot, simply because Apple already did all the overclocking work for you :)
 
  • Like
Reactions: B S Magnet
The 970MP’s datasheet begs to differ (cf. page 14).
The 970FX's datasheet does top out at 2.0 GHz (cf. page 16).
Considering the 970MP, I want to note that the 2.5 GHz part requires 200 mV more voltage than the 2.0 GHz one: 1.0 V base vs. 1.2 V for 2.5 GHz (assuming the same silicon quality with the same VDD fuse code).
In my eyes this looks like "legal" overclocking by IBM, and an aggressive one at that, with +20% overvolting.
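To put that figure in perspective, here is a minimal back-of-the-envelope sketch in Python. It assumes the usual first-order dynamic power approximation P ≈ C·V²·f (my assumption, not something from IBM's datasheet), ignores leakage, and uses only the two operating points quoted above; the function name and layout are purely illustrative.

```python
# Back-of-the-envelope sketch, assuming the first-order dynamic power model
# P ~ C * V^2 * f (ignores leakage/static power and any capacitance change).
# The two operating points are the 970MP figures quoted above.

def relative_dynamic_power(v_base, f_base, v_new, f_new):
    """Ratio of dynamic power between two voltage/frequency operating points."""
    return (v_new / v_base) ** 2 * (f_new / f_base)

v_20, f_20 = 1.0, 2.0   # 2.0 GHz bin: 1.0 V core voltage
v_25, f_25 = 1.2, 2.5   # 2.5 GHz bin: 1.2 V core voltage

overvolt = (v_25 - v_20) / v_20
ratio = relative_dynamic_power(v_20, f_20, v_25, f_25)

print(f"Overvolting: +{overvolt:.0%}")                 # +20%
print(f"Estimated dynamic power ratio: {ratio:.2f}x")  # (1.2/1.0)^2 * (2.5/2.0) = 1.80x
```

Under that crude model the 2.5 GHz part would draw roughly 1.8 times the dynamic power of the 2.0 GHz part for a 25% clock increase, which fits the higher heat output these machines are known for.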
 
  • Like
Reactions: B S Magnet
Considering the 970MP, I want to note that the 2.5 GHz part requires 200 mV more voltage than the 2.0 GHz one: 1.0 V base vs. 1.2 V for 2.5 GHz (assuming the same silicon quality with the same VDD fuse code).
In my eyes this looks like "legal" overclocking by IBM, and an aggressive one at that, with +20% overvolting.
And with a much higher power draw. Nonetheless, if a CPU is binned to run at a certain clock speed and voltage I don’t consider it overclocking. To me, that implies running stuff out of spec. But that’s a matter of semantical preference, I guess.
 
  • Like
Reactions: B S Magnet
And with a much higher power draw. Nonetheless, if a CPU is binned to run at a certain clock speed and voltage I don’t consider it overclocking. To me, that implies running stuff out of spec. But that’s a matter of semantical preference, I guess.
Yep, only semantics differ. My main point is that G5s already run at close to their maximum frequencies from the factory, so it seems that overclocking won't give huge gains.
 
Yep, only semantics differ. My main point is that G5s already run at close to their maximum frequencies from the factory, so it seems that overclocking won't give huge gains.
I wonder if either Apple or IBM did try to push it to 3.0 GHz, only to discover that heat and power draw were brutal, if it was even stable, that is.
 
  • Like
Reactions: B S Magnet
I wonder if either Apple or IBM did try to push it to 3.0 GHz, only to discover that heat and power draw were brutal, if it was even stable, that is.

I suspect they most certainly tried — especially so by Apple, given their front-loading of that promise publicly (and the subsequent need to prove, in lab, that it could be done, even if the conditions for doing so could not be scaled for production). In the end, I suspect that the in-lab testing at 3.0GHz was too mercurial a pinnacle to be assured beyond the lab — as the cost to produce the wafers with a cost-effective number of 3.0-capable/verified CPUs, i.e., with the fewest of imperfections in each wafer, was probably never going to be a viable target for either Apple or IBM.
 
  • Like
Reactions: Amethyst1
I suspect they most certainly tried — especially so by Apple, given their front-loading of that promise publicly (and the subsequent need to prove, in lab, that it could be done, even if the conditions for doing so could not be scaled for production). In the end, I suspect that the in-lab testing at 3.0GHz was too mercurial a pinnacle to be assured beyond the lab — as the cost to produce the wafers with a cost-effective number of 3.0-capable/verified CPUs, i.e., with the fewest of imperfections in each wafer, was probably never going to be a viable target for either Apple or IBM.

Faults in the wafers are always a problem. This recurs with every production run.
 
Faults in the wafers are always a problem. This recurs with every production run.

Of course. By the same token, the frequency of faults for a targeted clock speed for each chip on that wafer determines how to sell the bulk of those wafers whose chips can be certified for up to a certain clock speed.

Historical case in point: when Apple rolled out the PPC 7400 in August 1999, they very quickly ran into a problem: the inability for Motorola (and even IBM) to produce enough 7400 wafers with chips which could reliably be clocked to 500MHz. Few of those chips in any particular wafer, given the manufacturing processes for that moment, could be clocked to 500MHz, and demand would completely outstrip supply.

So Apple, in October that year, downclocked their entire, brand-new line of Power Mac G4s, so that the fastest clock speed one could order was a 450MHz CPU. Many of those chips couldn’t be certified at 500MHz, but could at 450MHz. (Of course, Apple kept all G4 prices the same, which infuriated purchasers who’d been waiting for their G4 tower to arrive, only to know that what would arrive would be 50 MHz slower than what was ordered.)

So to look back on what happened in 1999, Apple — atop all the other logistical issues with delivering a 3.0 GHz PPC 970 product in 2004 — probably kept memory of that not-so-distant issue in mind (it had only been five years) when realizing how to deliver a G5 product clocked at 3.0GHz, successfully, was never going to happen, given near- and mid-term technological and logistical constraints.

These constraints included, of course, the paucity of chips IBM could manufacture which could, feasibly, be certified for 3.0 GHz — even minding the amount of power and active cooling needed to accommodate all that generated heat. It just wasn’t to be, and as far as POWER4-based CPUs go, it still isn’t.
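To put some entirely made-up numbers on that binning argument: the sketch below assumes each die's maximum stable clock is roughly normally distributed (the mean, spread, and distribution shape are my assumptions; IBM never published per-bin yield data for the 970 line) and shows how sharply the qualifying fraction collapses as the target clock rises.

```python
# Toy speed-binning illustration. All numbers are invented for the sake of
# the argument; this is not real 970/970MP yield data.
from statistics import NormalDist

# Hypothetical spread of each die's maximum stable clock, in GHz.
max_stable_clock = NormalDist(mu=2.4, sigma=0.2)

for target_ghz in (2.0, 2.3, 2.5, 3.0):
    # Fraction of dice that would pass certification at this target clock.
    qualifying = 1.0 - max_stable_clock.cdf(target_ghz)
    print(f"{target_ghz:.1f} GHz bin: ~{qualifying:.1%} of dice qualify")
```

With this toy distribution, roughly 98% of dice clear 2.0 GHz and about 31% clear 2.5 GHz, but well under 1% clear 3.0 GHz, which is the economic wall described above.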
 
When did the
Of course. By the same token, the frequency of faults for a targeted clock speed for each chip on that wafer determines how to sell the bulk of those wafers whose chips can be certified for up to a certain clock speed.

Historical case in point: when Apple rolled out the PPC 7400 in August 1999, they very quickly ran into a problem: the inability for Motorola (and even IBM) to produce enough 7400 wafers with chips which could reliably be clocked to 500MHz. Few of those chips in any particular wafer, given the manufacturing processes for that moment, could be clocked to 500MHz, and demand would completely outstrip supply.

So Apple, in October that year, downclocked their entire, brand-new line of Power Mac G4s, so that the fastest clock speed one could order was a 450MHz CPU. Many of those chips couldn’t be certified at 500MHz, but could at 450MHz. (Of course, Apple kept all G4 prices the same, which infuriated purchasers who’d been waiting for their G4 tower to arrive, only to know that what would arrive would be 50 MHz slower than what was ordered.)

So to look back on what happened in 1999, Apple — atop all the other logistical issues with delivering a 3.0 GHz PPC 970 product in 2004 — probably kept memory of that not-so-distant issue in mind (it had only been five years) when realizing how to deliver a G5 product clocked at 3.0GHz, successfully, was never going to happen, given near- and mid-term technological and logistical constraints.

These constraints included, of course, the paucity of chips IBM could manufacture which could, feasibly, be certified for 3.0 GHz — even minding the amount of power and active cooling needed to accommodate all that generated heat. It just wasn’t to be, and as far as POWER4-based CPUs go, it still isn’t.
The new models support a maximum of 16 GB ECC Chipkill DDR2 memory, with a choice of 400 MHz or 533 MHz memory DIMMs. Model 31x, with 1 GB DDR2 ECC SDRAM 400 MHz memory standard, offers two single-core, 2.7 GHz, 64-bit PowerPC 970MP processors in the BladeCenter H chassis (2.6 GHz in all other BladeCenter chassis). The 51x, with 2 GB DDR2 ECC SDRAM 400 MHz memory standard, offers two dual-core, 2.5 GHz, 64-bit PowerPC 970MP processors in the BladeCenter H chassis (2.3 GHz in all other BladeCenter chassis). Each processor has 1 MB of L2 cache per core.

So was the 2.6 GHz part binned as you described, or is it overclocked?

I reckon these are single-core processors!
 