Rincewind42 said:
How do you come to this conclusion? For the most part, the 970, like most other CPUs, executes simple integer instructions in one cycle. Where you get a two-cycle latency is if the instructions are dependent, but the compiler or software writer can usually produce code that avoids the latency (or at least masks it). Obviously, code that was optimized for non-970 PowerPCs may have code sequences that expose this issue, but code optimized specifically for the 970 should be fine.

After reading this from Ars Technica's Bad Andy (I have inserted some small clarifying parenthetical remarks):
That 2-cycle {simple integer latency} comes from the "extra cycle of cross-over latency" imposed throughout the design, due to the duplicated register-file/execution unit systems.

IBM has clearly stated (somewhere, I can't find it) that the fundamental simple-integer pipeline latency is one (and by god, at these frequencies it would be really, really embarrassing if the simple integer pipeline latency WEREN'T one). The extra cycle is the cost of communicating the pipeline result to the OTHER register-file/execution system in the pair. IBM keeps the latency fixed at this worst case (whether the dependent instruction is in the same unit's issue queue or not) as a matter of issue-logic simplicity (it also avoids what would otherwise be another compiler optimization). IBM stated publicly that they studied the cost of allowing dependent instructions in the same queue to issue on the next cycle, and decided "it wasn't worth it."

Of course "wasn't worth it" depends on who is doing the looking and what their metric is. It is really goddam clear how badly that 2-cycle latency impacts serialized small-integer code... worst-case the 970 becomes a 1/2 IPC processor. This just slays some types of common code (simple ill-optimized compilers and interpreters particularly)... and we see this in performance scores.

With Power5 IBM uses SMT to further bury this ... (two serially-dependent threads running simultaneously at least manage 1 IPC :( ... but more fundamentally it is statistically far less probable to have two serially-dependent code sections working at the same time, and more generally the integer section is often a work-rate limiter for other poorly optimized code (even FP!) ... so if one thread is serially-dependent there is a good chance the other thread can avail itself of the 1.5 integer IPC available!)
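(To put numbers on that worst case, a quick sketch; purely illustrative Python, not a cycle-accurate 970 model. The 2-cycle dependent latency comes from the text above; everything else here is my own assumption.)

```python
# Back-of-the-envelope model (not cycle-accurate) of why a 2-cycle
# dependent-op latency halves throughput on fully serialized code,
# and why interleaving a second SMT thread recovers it.

def cycles_for_chain(n_ops, latency):
    """Cycles to run n_ops where each op depends on the previous one:
    the next op cannot issue until its input is ready."""
    return n_ops * latency

n, lat = 1000, 2  # 2-cycle worst-case dependent latency, per the post

print(n / cycles_for_chain(n, lat))        # 0.5 IPC: one serialized thread
# Two independent serially-dependent threads: thread B can issue in the
# cycles thread A spends waiting, so 2*n ops finish in ~2*n cycles.
print((2 * n) / cycles_for_chain(n, lat))  # 1.0 IPC combined
```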

And it would seem like you were right about the simple integer bit; looks like I remembered wrong. :(
 
new materials

JOD8FY said:
I also think that the new PB's could have a carbon fiber case.

Aside from that, this news sounds great. Can't wait to see how it turns out.

JOD8FY

Alas, carbon fiber has very poor heat-conducting properties.

My guess is they will use a magnesium alloy. It's a bit more expensive than aluminium but has better heat conduction, and it looks even more exclusive (brown/gold glow). The only problem is that it also shields EM radiation better, so probably bye-bye to AirPort reception, unless they use some sort of external antenna.
 
netherfred said:
The only problem is that it also shields EM radiation better, so probably bye-bye to AirPort reception, unless they use some sort of external antenna.

Something other than the existing external antennas they employ on the sides of the screen? I would hope this would suffice... no point having some silly thing protruding from the back or side of a laptop.
 
Look, I haven't done any hardcore benchmarking, but my experience with the Pentium M isn't very good. I bought a 1.5 GHz (or was it 1.6?) Pentium M VAIO for my mother a few months ago. I really was under the impression that the Pentium M was a pretty fast chip, so I felt that speed wasn't something that would be an issue. However, I was surprised to discover that in terms of overall responsiveness the system was quite slow. Application launching and switching take forever (compared to my PB). Almost every part of the system that would contribute to the general perception of speed is slow.

So I'm not sure how the two computers would fare in a Photoshop showdown or an Unreal Tournament botmatch. I'm quite willing to accept that the Pentium M would own the G4. However, what's the point of having a fast processor when the system just feels slow as ****? This is just a continuation of the age-old problem: it doesn't matter how much faster your computer gets, Windows just keeps getting slower.

This has got nothing to do with being apologetic for Apple; it's a very simple real-life observation. I would NOT want to invest in a Pentium M laptop after the experience with this one (and a few older Windows laptops I've used). I don't care how many frames per second it's running in Unreal if the responsiveness of the system is pathetic. Quite frankly, if you had anyone use the two systems for general everyday tasks (without giving them any prior information about the supposed speed of the two systems), I think just about anyone would come to the conclusion that the PowerBook is substantially faster.
 
MacinDoc said:
I still think the 3 GHz chip and the low power chip are 2 different chips (the low power being 1.6-1.8 GHz for PowerBooks). With the extra L2 cache, the 3 GHz 970 GX will probably still put out as much heat as the 2.5 GHz 970 FX.

I know it's hard to believe, but it is possible that they are the same chip. IBM's own slides show that if they could drop the voltage to 0.8V, then power consumption of a 1.25GHz 970FX would be only 15 watts (down from 100W at 1.3V and 2.5GHz). I doubt Apple will be selling a 1.25GHz G5 soon, but assuming that the 970GX can reduce wattage further, it might not be unreasonable to see a 970GX in the range of 1.6-2.0GHz that might be suitable for a PowerBook. And it makes more sense to put your resources into making one power-efficient CPU than one low-power and one really fast, because if you have a CPU that can work at really low power, then if you cool it enough you can make it really fast too :).
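As a rough sanity check on those slide numbers (a first-order sketch only: dynamic CMOS power scales roughly with V²·f; the 100W/1.3V/2.5GHz and 0.8V/1.25GHz figures are the ones quoted above, and the function name is mine):

```python
# First-order check of the IBM slide numbers: dynamic CMOS power scales
# roughly as P ~ C * V^2 * f (this ignores leakage and static power).

def scaled_power(p0, v0, f0, v1, f1):
    """Scale a known power figure to a new voltage and frequency."""
    return p0 * (v1 / v0) ** 2 * (f1 / f0)

p = scaled_power(p0=100.0, v0=1.3, f0=2.5, v1=0.8, f1=1.25)
print(f"{p:.1f} W")  # ~18.9 W, in the ballpark of the slide's 15 W;
                     # the gap is second-order effects the rule ignores
```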

And yes, I consider the P4 vs P-M from Intel an anomaly. They designed the P4 for one purpose: to crank the clock as high as possible and dupe consumers into thinking that MHz was all that mattered. Now they have P-M machines that are beating P4 machines at half the clock speed and can't crank the P4 up any higher, so they have to go to model numbers to get out of the situation :D.
 
Good summary - a couple of comments

sjl said:
Net result: your typical compiler has four registers it can rely on; maybe up to eleven if it's really careful about the way it does things.

Ars Technica says 8 general-purpose registers (within your 4 to 11 range).

The P6 and P4 actually have 40 to 128 internal general-purpose registers for the micro-engine to use (depending on how you define them); a technique called "register renaming" is used to alleviate issues caused by the small number of GPRs.

"Register renaming allows a processor to have a larger number of actual registers than the ISA specifies, thereby enabling the chip to do more computations simultaneously without running out of registers. Of course, there's some sleight-of-hand involved in fooling the program into thinking that it's using only eight registers, when it's really using up to 40, but this isn't conceptually much different than the sleight-of-hand that fools the program into thinking that it's running sequentially when it's really running out-of-order."

From: http://arstechnica.com/articles/paedia/cpu/pentium-1.ars/5

"The x86 ISA has only 8 GPRs, but the P4 augments these with the addition of a large number of rename registers: 128 to be exact. Since the P4 keeps so many instructions on-chip for scheduling purposes, it needs these added rename resources to prevent the kinds of register-based resource conflicts that result in pipeline bubbles."

http://arstechnica.com/articles/paedia/cpu/p4andg4e2.ars/2

PPC970 has 48 rename registers (http://www.xbitlabs.com/articles/cpu/display/powerpc-g5_13.html).

I know that you hinted at this as "hand waving" in your message, but in context I feel that it's an important detail.
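For anyone who hasn't run into renaming before, here's a minimal sketch of the idea (illustrative Python, not how the P4 or 970 actually implements it; among other things, real renamers also reclaim physical registers at retirement):

```python
# Minimal sketch of register renaming: architectural register names are
# remapped onto a larger pool of physical registers, so independent
# writes to the same architectural register no longer serialize.

class RenameTable:
    def __init__(self, arch_regs, num_physical):
        self.map = {r: i for i, r in enumerate(arch_regs)}   # arch -> phys
        self.free = list(range(len(arch_regs), num_physical))

    def read(self, arch_reg):
        """A source operand reads whichever physical register holds the
        newest value of the architectural register."""
        return self.map[arch_reg]

    def write(self, arch_reg):
        """A destination gets a fresh physical register, so older
        in-flight readers still see the previous one."""
        phys = self.free.pop(0)
        self.map[arch_reg] = phys
        return phys

rt = RenameTable(["eax", "ebx", "ecx", "edx"], num_physical=40)
print(rt.write("eax"))  # mov eax, [a] -> phys 4
print(rt.write("eax"))  # mov eax, [b] -> phys 5 (no false dependency)
```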



sjl said:
On x86, it'll be constantly swapping data in and out of memory so it can play around with the registers -- kinda like if you're trying to correlate several documents, but only have room for two on your desk at once. On PowerPC, on the other hand, it can load the bits of data into the general purpose registers once, play around with them to its heart's content, and flush the end result out to memory at the end. Net result: you lose a bunch of memory accesses, and everything's a lot faster.

Many benchmarks fail to substantiate your claim of "a lot faster". The internal architecture of the P6 and Pentium 4 overcomes some of the constraints of the baroque x86 architecture.



sjl said:
(I don't know if the 32 bit mode has been extended to include the new registers, or not. I'm guessing not, but I won't swear to it. IMO, it should have been, but I wasn't involved in the design. :D)

Therein lies the reason (or a very large part of it) for the speed boost when you compile code for x86-64 over x86-32: the extra registers. Yes, you lose out on the amount of data you're shuffling around -- 64 bits vs 32 bits, you've got a lot more data coming over the memory bus -- but you gain because you cut the amount of data shuffling you need to get the work done. The latter outweighs the former in the vast majority of cases.

There wouldn't have been a lot of value in extending the 32-bit mode - you'd have x86-incompatible code that would require a 64-bit processor to run in 32-bit mode. As long as you're forcing a recompilation, bite the bullet and compile to 64-bit mode - overall it keeps things simpler (and one would expect that over time 64-bit becomes the predominant target).

Also, using the 64-bit general-purpose registers doesn't imply moving more data to/from memory. You can still use them for 8-bit, 16-bit and 32-bit data - no additional data movement is required for the extra registers.

Pointers are always 64 bits, however, so there is more memory traffic when loading and storing address data. This makes your claim of the advantages (of the extra registers) outweighing the cost of additional memory traffic even stronger.
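To make the "data shuffling" point concrete, here's a toy model (illustrative Python only; the trace, variable count, and LRU policy are my assumptions, not how any real register allocator works):

```python
# Toy register-pressure model: replay one access trace against an
# LRU-managed "register file" of 8 vs 16 registers and count the loads
# (register misses) that turn into memory traffic.
from collections import OrderedDict
import random

def loads_needed(trace, num_regs):
    regs = OrderedDict()
    loads = 0
    for var in trace:
        if var in regs:
            regs.move_to_end(var)         # hit: already in a register
        else:
            loads += 1                    # miss: fetch from memory
            if len(regs) == num_regs:
                regs.popitem(last=False)  # spill least recently used
            regs[var] = True
    return loads

random.seed(0)
live_values = [f"v{i}" for i in range(12)]            # 12 live values
trace = [random.choice(live_values) for _ in range(10_000)]

print("loads with  8 regs:", loads_needed(trace, 8))
print("loads with 16 regs:", loads_needed(trace, 16)) # all 12 fit: only
                                                      # 12 cold misses
```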

Tom's Hardware has a nice graphic of the registers (new and old) at http://www.tomshardware.com/cpu/20030422/opteron-06.html.
 
Yvan256 said:
Sorry to say, but your test is meaningless because you were ripping from different CD/DVD drives. Your test is not only about the CPU power.

What you should do is test the same machines again, directly from an already-dumped WAV/AIFF file. Even then it wouldn't really be a good test because iTunes on Windows could be running on some kind of emulated platform (is it *really* a native Windows application? I find the page scrolling in iTMS incredibly slow)

Of course, it would be a good iTunes test, just remove the CD ripping part of the test.

I know it's not the best test, but if the Mac were as CPU-bound as I would have thought, I would have seen results similar to the Duron's 2x (and the Duron most definitely was CPU-bound). If I have some time this weekend I may try it again with the same files on all computers: extract to AIFF, then convert and import in iTunes.

I'm sure iTunes is a native Windows app. Apple does know how to do Windows apps - look at QuickTime...
 
GOOD LUCK!!!

Dr. No said:
Is it very likely that we will see a new G5 desktop in March? :confused:
If we see these things come out in March, how long will it take for shipping? Say, 4 months later? So now it is June/July. And what about heat issues? Will the fans be running at full speed all the time? I think they have a few issues to work out before getting this to market... unless this is the fix. I hope not.

Chris
 
Hector said:
The 68000 is a CISC chip, as are all 68k Moto chips; the first RISC chip that made its way to the Mac was the PPC 601.

2nd this.

Regarding the iTunes encoding session posting, my sister's stock 1.25GHz eMac routinely hits 18x encoding speed (with error checking).

A lot quicker than the 7.2x (error checking disabled) my 2GHz XP/Win 2000 machine maxes out at.
 
PowerMac G5

What does this mean for the PowerMac?

Does anybody think there are going to be any revisions to it soon?!

I am going to an IBM conference tomorrow that deals with the Power5 processor, so I hope to hear some stuff there.
 
Can't wait to get my G5 Powerbook

Great! Finally a possible path to a G5 Powerbook.

I can't wait to get one, so Virtual PC will run a little faster. ;)
 
New G5 -> iMac?

I need to buy an iMac within the next few months. I'll be going for the 17" model. What do y'all think the chances are of some of those faster G5s making it into an iMac anytime soon?
 
lukelukeluke said:
I need to buy an iMac within the next few months. I'll be going for the 17" model. What do y'all think the chances are of some of those faster G5s making it into an iMac anytime soon?

Doubtful. The new G5 iMac was just released; they would not introduce an update of this nature so quickly to the product line. Don't expect another iMac update until spring, is my guess. :cool:
 
lukelukeluke said:
I need to buy an iMac within the next few months. I'll be going for the 17" model. What do y'all think the chances are of some of those faster G5s making it into an iMac anytime soon?

The iMac G5 is a fine machine, IMHO.

Load it up with RAM and you will be pleased. :D

The 17-inch 1.8GHz model seems like the BEST DEAL in the product line, the reason being that unless you are working on PRO VIDEO and GRAPHICS work, you are paying extra for the 20-inch screen, in which case you are better off going for a PMG5.

I hear some people buy the 20-inch iMac G5 to use as a TV; if you plan on doing so, then go for the 20-inch. However, I still feel the middle model has the best of both worlds. :)

Best of luck on your purchase. The iMac G5 will not be revised for a while, and given that Apple has been stretching out its revision cycles recently, you will most likely see an iMac G5 rev B by WWDC05.

And of note, these NEW chips will be making their way first to the PMG5, and maybe to the PowerBook line.

It will be a good 6-10 months AFTER the PMG5 gets these chips before we see them in an iMac G5, again all depending on production.
 
combatcolin said:
2nd this.

Regarding the iTunes encoding session posting, my sister's stock 1.25GHz eMac routinely hits 18x encoding speed (with error checking).

A lot quicker than the 7.2x (error checking disabled) my 2GHz XP/Win 2000 machine maxes out at.

Using iTunes encoding is totally flawed; it runs like crap on PCs. A better benchmark would be visualizer fps if you're using iTunes, or better still something more Mac and PC friendly like Cinebench or an OpenGL game.
 
Hector said:
Using iTunes encoding is totally flawed; it runs like crap on PCs. A better benchmark would be visualizer fps if you're using iTunes, or better still something more Mac and PC friendly like Cinebench or an OpenGL game.
I agree that it isn't really a fair benchmark in many ways, but it is OK on two points.

First of all, it is one of the few common tasks that normal Windows and Mac users actually do, and they notice if it goes slow. Many Windows users use iTunes because it is one of the better music organizer apps, plus if you have an iPod ...

Second of all, it's not any worse than benchmarking how fast Word is able to build a table of contents, or something like that, which PC Magazine did in one of their first G5 vs P4 tests.
 
Hector said:
Using iTunes encoding is totally flawed; it runs like crap on PCs. A better benchmark would be visualizer fps if you're using iTunes, or better still something more Mac and PC friendly like Cinebench or an OpenGL game.
I've got a counter-example for you. On my 3.2 GHz P4 desktop with 1 GB of RAM and a 200 GB HD, I hit 50x encoding speed once - this happened when making AAC files @ 128 kbps. My iMac, on the other hand, never gets above 15x, and averages around 11x. The weird thing is that I never got that 50x encoding again - I have been consistently getting around half of that (25x) - which still beats my iMac by a fair margin.
 
wrldwzrd89 said:
I've got a counter-example for you. On my 3.2 GHz P4 desktop with 1 GB of RAM and a 200 GB HD, I hit 50x encoding speed once - this happened when making AAC files @ 128 kbps. My iMac, on the other hand, never gets above 15x, and averages around 11x. The weird thing is that I never got that 50x encoding again - I have been consistently getting around half of that (25x) - which still beats my iMac by a fair margin.

The question is what are your iMac specs compared to your P4? Is the ram the same? Is the HD the same speed? Is the processor speed even close? If you want to compare Macs to Personal Confusers, compare with similar machines.
 
ASP272 said:
The question is what are your iMac specs compared to your P4? Is the ram the same? Is the HD the same speed? Is the processor speed even close? If you want to compare Macs to Personal Confusers, compare with similar machines.
The specs are on my website - look in my signature if you're interested. Anyway, the point of that post was to show that iTunes encoding on Windows doesn't necessarily have to be horridly slow. I included my iMac purely as a point of reference and not as a benchmark. I'm sorry if that wasn't very clear.
 
JRM said:
Does anyone believe we will see graphics support for a 30" Display in the next PBs, whether they be G4s or G5s? Does anyone know of a portable DDL card? I know Alienware has a 256MB 6800 in one of their laptops, but I don't think it is DDL compliant. Or will Apple leave the 30" Displays to the high, high-end desktop users? :)

When I read that title, I thought: "How the hell would you lug around a PB with a 30" display!?!?"
 
if you can lug 17", why not?

AliensAreFuzzy said:
When I read that title, I thought: "How the hell would you lug around a PB with a 30" display!?!?"

The 17" is already too big for many people - why not 30" ???


ps: 14" 4:3 is fine for me - fits on the airline tray with room for the USB mini-mouse. I don't *want* anything wider or taller!
 
Hector said:
The 68000 is a CISC chip, as are all 68k Moto chips; the first RISC chip that made its way to the Mac was the PPC 601.
You could be right. I was reasonably sure it's RISC, though; I honestly don't know for certain.

AidenShaw said:
Ars Technica says 8 general-purpose registers (within your 4 to 11 range).
Yes and no. The problem is that some of those general purpose registers are actually used to keep track of where the current base of the stack is (EBP, in conjunction with SS); the top of the stack (ESP); and a bunch of other such stuff -- basically, state data that is needed to support C code. That means they're not available for general data within C programs. That's why I was somewhat handwavy about the number; it depends on what the C compiler does, and whether you're prepared to put up with the inability to easily debug your code (you can choose whether or not to have a frame pointer, which makes debugging easier if present, but frees up a general purpose register if it's not, for example.)

AidenShaw said:
I know that you hinted at this as "hand waving" in your message, but in context I feel that it's an important detail.
I won't argue the point. I just felt it wasn't quite so relevant to the general discussion and the points I was making, is all. :D As I understand it, the register renaming tends to be used for "speculative branching", although I don't see any reason why it couldn't be used to reduce the memory shuffling instead. This is getting into black magic, though, and I don't really understand all the ins and outs of this stuff. I can mostly understand CPUs of the 6502 vintage; anything more recent (say, Pentium onwards in particular) is a black box to me. :D

AidenShaw said:
There wouldn't have been a lot of value in extending the 32-bit mode - you'd have x86-incompatible code that would require a 64-bit processor to run in 32-bit mode. As long as you're forcing a recompilation, bite the bullet and compile to 64-bit mode - overall it keeps things simpler (and one would expect that over time 64-bit becomes the predominant target).
I'm not so sure. SPARC is a case in point: UltraSPARC is a 64 bit processor, but you still see a great deal of software compiled in 32 bit mode for Solaris. This isn't so much for compatibility -- a lot of this stuff is certified for later versions of Solaris that generally run on UltraSPARC-based systems -- as it is for performance, AIUI. If you don't need the 64 bit stuff -- ie, your memory needs are modest, and your integer arithmetic fits fine in 32 bits -- the odds are good that any appropriate Solaris stuff will be 32 bit code, not 64. This is basically where I was coming from. If the CPU works fine with both 32 and 64 bit code, I don't see a general need to recompile 32 bit code to 64 bit code just for the sake of it. x86-64 is a case where it wouldn't be for the sake of it -- you'd be recompiling to take advantage of the extra registers.

AidenShaw said:
Tom's Hardware has a nice graphic of the registers (new and old) at http://www.tomshardware.com/cpu/20030422/opteron-06.html.
Nifty. Thanks for that.
 
m a y a said:
With these new chips the iMac G5 rev B will be quieter, just as with the PMG5 rev A ---> rev B situation.


A quieter iMac G5 = priceless :D
I thought the iMac G5 rev. A was quiet enough as it is. How could Apple make it produce even less noise while still upgrading it? Also, as far as I'm aware, there wasn't THAT big of a difference in noise between the two revisions of PowerMac G5.
 