Status
Not open for further replies.
iMeowbot said:
Horses would be faster than jets, if only the horses could run faster.

It sounds like an essay.
"Horses would be faster than jets, if only the horses could run faster. Discuss."

AppleMatt (no don't)
 
I agree with the original poster. The G4 is a fantastic chip, and if Moto hadn't dropped the ball we would still be using them. That said, my dual G5 smokes any G4 system out there, and I am sure the Core Duos will perform better than that. You see, chips are not really getting faster: code is getting slimmer, system buses are increasing, hard drives are spinning faster.

I am still having a hard time seeing how an Intel-designed Mac will be faster than a Quad 2.5 GHz, though.

I bet it isn't..... :)
 
MacTruck said:
I agree with the original poster. The G4 is a fantastic chip, and if Moto hadn't dropped the ball we would still be using them. That said, my dual G5 smokes any G4 system out there, and I am sure the Core Duos will perform better than that. You see, chips are not really getting faster: code is getting slimmer, system buses are increasing, hard drives are spinning faster.

I am still having a hard time seeing how an Intel-designed Mac will be faster than a Quad 2.5 GHz, though.

I bet it isn't..... :)

Well... there's the dilemma. We won't see an Intel Power Mac until it is faster than a Quad G5.
 
I recall the original G5 tests being hammered by AMD and Intel in certain areas. It would always depend on the test, and more often than not, the person doing the test (i.e., Mac user or PC user). I'll be very interested to see the new design for the PowerMac.
 
johnnybluejeans said:
Furthermore, your "experiments" are already biased because all you want to do is try to come up with some computable number (which may or may not be completely meaningless) that shows a G4 sporting a bigger number than an Intel processor.

And if you are going to compare processors, why choose the G4 vs. Core Duo? Why not a G5 vs. Core Duo? Oh I bet I know, because you have a G4 and you need to make yourself feel better. I understand.


Actually... I have an iMac G3 DV+. And also, the G5 is a 64-bit processor, so right there you can't compare the two. Sure, the G5 can do 32-bit calculations like mad, but it's even faster at 64-bit. I figure that the amount of data the processor can put out during each cycle or instruction (i.e., bits or bytes per cycle) is what essentially makes the processor slower or faster in real-life situations. Additionally, you forget that I also stated that the bus by itself is 4 times faster than the G4's bus, the hard drives are faster, and the RAM is faster. All of those together should give the computer an overall 4x increase in speed.
 
I understand what you are talking about. I have to explain it to a lot of my friends who are buying a new computer now. They wonder why I advise them to get a new 1.67 GHz processor when their old one was a 2.8. Well, you have to think about all the factors. CPU, FSB, RAM, and virtual memory are the major bottlenecks. So, keeping this in mind, think about the faster G4. Does the FSB increase from the slower G4 to the faster G4? No, so the FSB must not be the limiting factor, given that the faster-clocked G4 is in fact faster than the slower-clocked G4. The RAM is another instance: if you have the same RAM in two different G4s, you can determine that the RAM speed is not the limiting factor either.

It is really simple when you think about Apple products, because they use fairly standard equipment across the board, so you can narrow down what is limiting your speed. The CPU clock cycle and design are what limit the G4. No, a G4 is not as fast as a Core Duo, and it's not because of the clock speed but the design!

You have to think about chip size. A smaller chip narrows down the pipeline, and the instructions can get through more efficiently. With more instructions getting through faster and more efficiently, the Core Duo is faster. A Core Solo running at the same clock speed as a G4 would be faster than the G4 because of this; the increased FSB and RAM speeds help indicate this too. Also know that virtual memory will almost always be a limiting factor, just due to the naturally slow movement of data involved in spinning actual disks.

And if we are going to play the what-if game, then my what-if is: what if my MBP shipped tomorrow and arrived the day after? Then I would be one happy camper.
 
shyataroo said:
I do not like Intel. That's why I want to find out what the output of each processor is. Say the Intel's 2.1 dual-core output is 160 GB a second of 1's and 0's, and the G4's single-core output is 70 GB a second. I multiply the output in GB by 1,073,741,824 to get the bytes per cycle of each processor running at their respective frequencies, then I multiply the frequencies in GHz by 1 billion and divide the frequency by the output, and get the answer, which then proves, using no benchmarks, which is faster.




why can't you measure the output in GB's?


I was speaking theoretically.

You obviously never understood anything in computer architecture.

The reason why you can't measure anything in GB's is because a simple multiply instruction on the G4 will be stored (yes, stored IN MEMORY) as a collection of add instructions in a loop, while on the Intel it is a simple "Multiply X by X".

It is not meaningful to measure it in "GB" or whatever you say, nor is it meaningful to assume "Oh, if I scale the bus speed on the G4 4 times I will get ZOMG! Super G4 that smokes 4 Core Duos!" because there are too many factors in play.

For all you know the G4 is not even capable of using that bandwidth.
 
Eh, a pointless comparison nonetheless. Maybe you can't get past the fact that the new Intel-based Apple computers are faster than the G4? Well, it's true, and you'll just have to live with it! I don't care what's under the hood, as long as it does the job and it's faster.

All the G4s that have been developed over the past 2 years have basically just been "overclocked": they've just increased the multiplier to reach the higher clock speed, since the FSB will never be increased. The FSB is one of the greatest bottlenecks in a computer's performance! And this is WHY Apple dumped the PowerPC architecture. They wanted a lower-power chip, which was the Core Duo. PERFORMANCE PER WATT!

Speaking of the Power Mac (which should be called MacTower? Or Mac Pro...), they may launch it with Intel's Conroe processor, which is the desktop-based Core Duo processor. It will launch in September with speeds up to 2.66 GHz, so hopefully it will be faster than the Power Mac G5 Quad 2.5 GHz. Mac Pro Quad? Hmm, sounds kind of silly! :-D
 
fiercetiger224 said:
Mac Pro Quad? Hmm, sounds kind of silly! :-D

Mac Pro 4X...Sounds Sexy!

But yes, it will be faster, considering that the 20" Intel iMac is almost as fast as the Quad, and is faster than the current low-end Power Mac!
 
What you are saying is pointless, but to sum up: at lower clocks the G4 is faster than most, but at higher clocks it is slower than a G5, which makes it slower than a Core Duo clock for clock ;)
 
Yeah, clock for clock the G4 and G5 are faster than the Intel; there's no point in arguing against that. Also, I believe the G4 was faster clock for clock than the G5, but the G5's total speed is higher.

Heck, clock for clock the AMD chips blow Intel's out of the water. But the total overall speed in FLOPS is about the same for the same level of CPU, close enough for my point (this is in the desktop line; I understand those chips better).

The theoretical max clock speed a CPU can reach is about 7 GHz, and it is not a speed one can cross. There are ways to improve things without increasing clock speed. Yes, Intel is going to hit max speed long before AMD, but they will just find other ways to increase speed.

The way Intel increases speed is not as efficient as everyone else's; those designs have more heat issues at the same speed.

Btw, this post is not meant to start an AMD vs. Intel war over which is better or faster in total speed. They are close enough for the point of the argument, and close enough in quality for the point of the post. Nowhere in here did I say one is better than the other.
 
shyataroo said:
I figure that the amount of data the processor can put out during each cycle or instruction (i.e., bits or bytes per cycle) is what essentially makes the processor slower or faster in real-life situations.

I don't know why you cannot get this through your head. About a dozen folks here have told you that what you are "figuring" is wrong. This statistic you have invented for comparing processors is completely useless. Period.
 
MacTruck said:
I am still having a hard time seeing how an Intel-designed Mac will be faster than a Quad 2.5 GHz, though.

Bill Gates once had a hard time seeing how anyone could ever need more than 640K of memory.

Apple isn't going to switch the Power line over until the Intel procs are substantially faster than current offerings. The news as of this weekend is that Intel's QuadCore processors will be in manufacturers' hands by the end of the year. Can you say Dual QuadCore? Mmmmm.. 8 cores... drooool.
 
generik said:
You obviously never understood anything in computer architecture.

The reason why you can't measure anything in GB's is because a simple multiply instruction on the G4 will be stored (yes, stored IN MEMORY) as a collection of add instructions in a loop, while on the Intel it is a simple "Multiply X by X".

Oh well, oh well. On a G4, a 32-bit multiply instruction has a latency of 2 to 5 cycles, depending on the size of the second operand, and a throughput of one multiply operation per cycle. On a Pentium 4, the integer multiply instruction is actually passed on to the floating point unit, which has the disastrous effect that the latency is 15 cycles, with a throughput of one integer multiply every five cycles. AMD processors and the Pentium M family are closer to the G4. (One reason why the Pentium 4 family absolutely blows compared to the Pentium M.)

So how was that again with the computer architecture?
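A toy pipeline model (my own sketch, not from the thread; the latency and throughput figures are the ones quoted above) shows why those two numbers give such different totals for a stream of independent multiplies:

```python
def cycles_for_independent_ops(n, latency, ops_per_cycle):
    """Rough pipelined-execution model: the first result appears
    after `latency` cycles; after that, results retire at the
    steady-state throughput rate. Assumes all ops are independent
    (no result feeds the next multiply)."""
    if n == 0:
        return 0
    return latency + (n - 1) / ops_per_cycle

# 100 independent 32-bit multiplies, using the figures from the post:
# G4: 5-cycle worst-case latency, one multiply per cycle
# P4: 15-cycle latency, one multiply every five cycles
print(cycles_for_independent_ops(100, latency=5, ops_per_cycle=1))      # 104
print(cycles_for_independent_ops(100, latency=15, ops_per_cycle=1/5))   # 510.0
```

The model is deliberately crude (no decode, scheduling, or memory effects), but it captures the point: at the same clock, the P4 would spend roughly five times as many cycles on this multiply-heavy stream.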
 
The P4 is basically "larger and larger, and let's see how high we can get the GHz." The P3 was a smart and efficient chip that clocked lower. As you can see with the P-M, which is designed based on the P3, they got intelligent rather than forceful. Think of it this way: does brute force always win a war? Not always; tactics can win much better. The P4 is brute force, the P3 is a training course, and the P-M is the tactical strike.
 
johnnybluejeans said:
Bill Gates once had a hard time seeing how anyone could ever need more than 640K of memory.

Apple isn't going to switch the Power line over until the Intel procs are substantially faster than current offerings. The news as of this weekend is that Intel's QuadCore processors will be in manufacturers' hands by the end of the year. Can you say Dual QuadCore? Mmmmm.. 8 cores... drooool.


Well, Intel's roadmap shows a 32-core processor, but then again Intel's old roadmap showed a 4 GHz P4. :rolleyes:
 
generik said:
You obviously never understood anything in computer architecture.

The reason why you can't measure anything in GB's is because a simple multiply instruction on the G4 will be stored (yes, stored IN MEMORY) as a collection of add instructions in a loop, while on the Intel it is a simple "Multiply X by X".

It is not meaningful to measure it in "GB" or whatever you say, nor is it meaningful to assume "Oh, if I scale the bus speed on the G4 4 times I will get ZOMG! Super G4 that smokes 4 Core Duos!" because there are too many factors in play.

For all you know the G4 is not even capable of using that bandwidth.

Just a clarification about multiplying numbers. The G4 implements a RISC architecture, which means it implements a RISC instruction set. In RISC (MIPS-style syntax in what follows), to multiply a number you'd write mult t0 t1, where t0 and t1 are the registers where the numbers are located. That mult instruction is translated by an assembler into 32 0's and 1's, i.e., a single machine instruction.

When multiplying numbers it is possible to do so in a single clock cycle, if the chip implements it that way. It is also possible that it would take 32 clock cycles, by implementing mult as a series of shifts and adds, each done on a different clock cycle. I don't know how it's done on the G4; however, NO processor would ever implement mult x y as a series of y additions of x.

Here is an example of the shift-and-add technique on a hypothetical 8-bit system:

00101101 times 01011010 (45 times 90)

Notice that 90 = 64 + 16 + 8 + 2 (we can read this straight off the binary digits above). So if we could just find 2*45 + 8*45 + 16*45 + 64*45, we'd have the same thing as 90*45. To multiply by 2, we simply shift left, i.e., append a zero to our number. This works for exactly the same reason that appending a zero multiplies a number by 10 in our base-10 system. To multiply by 4, we simply shift left two places. To multiply by 8, we shift 3 places, etc.

0000000001011010 (90 = 2x45)
0000000101101000 (360 = 8x45)
0000001011010000 (720 = 16x45)
0000101101000000 (2880 = 64x45)

Now we add these together. On a real computer this would be done two at a time; I'll just add them: 90 + 360 + 720 + 2880 = 4050. This is, of course, the right answer. However, we have a problem: our answer is 0000111111010010, which exceeds the 8 bits of storage in our registers. Therefore, we store half of our answer in a special register called mfhi, and half in another special register called mflo. We know that 16 bits will always be enough space for two 8-bit numbers multiplied together, because 2^8 - 1 is the largest 8-bit number, and that squared is less than 2^16 - 1, the largest 16-bit number.

Interestingly, this is very similar to how the ancient Egyptians did multiplication. Given how similar the two systems are, it's actually surprising that the Egyptians didn't invent binary.
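The walkthrough above can be sketched in a few lines of Python (a hypothetical helper, just to illustrate the technique; the hi/lo split mirrors the mfhi/mflo registers mentioned):

```python
def shift_add_multiply(a, b, bits=8):
    """Multiply two unsigned integers by shifts and adds, the way
    the hypothetical 8-bit example above does it: for each set
    bit i of b, add (a << i) to the running total. The product of
    two `bits`-wide numbers always fits in 2*bits bits, so it is
    split into hi/lo halves like the MIPS mfhi/mflo registers."""
    assert 0 <= a < 2**bits and 0 <= b < 2**bits
    result = 0
    for i in range(bits):
        if (b >> i) & 1:          # bit i of b is set
            result += a << i      # add a * 2**i (shift = multiply by 2**i)
    hi = result >> bits           # upper half (mfhi)
    lo = result & (2**bits - 1)   # lower half (mflo)
    return result, hi, lo

# The worked example: 45 * 90, where 90 = 64 + 16 + 8 + 2
print(shift_add_multiply(45, 90))  # (4050, 15, 210)
```

Note the loop runs once per bit of b (8 iterations here, 32 on a 32-bit machine), not once per unit of b's value, which is exactly why no processor would implement mult as y repeated additions.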
 
gnasher729 said:
Oh well, oh well. On a G4, a 32-bit multiply instruction has a latency of 2 to 5 cycles, depending on the size of the second operand, and a throughput of one multiply operation per cycle. On a Pentium 4, the integer multiply instruction is actually passed on to the floating point unit, which has the disastrous effect that the latency is 15 cycles, with a throughput of one integer multiply every five cycles. AMD processors and the Pentium M family are closer to the G4. (One reason why the Pentium 4 family absolutely blows compared to the Pentium M.)

So how was that again with the computer architecture?

There, this man is a genius :p
 
AJBMatrix said:
The P4 is basically "larger and larger, and let's see how high we can get the GHz." The P3 was a smart and efficient chip that clocked lower. As you can see with the P-M, which is designed based on the P3, they got intelligent rather than forceful. Think of it this way: does brute force always win a war? Not always; tactics can win much better. The P4 is brute force, the P3 is a training course, and the P-M is the tactical strike.


Heh heh, someone got something right. I know the P3 setup has some huge limitations in it, and the P4 has some advantages over it. I remember reading that they do not intend for the P-M to go 64-bit, or that it couldn't. Meh, doesn't really matter.

Personally, I am still a fan of AMD. They get more bang for their buck in R&D. But like others have said, it is all about the chip design. My AMD 64 3000+'s clock speed is only 2 GHz, but it is faster than a 3 GHz Pentium when all is said and done, and it is a good chip. It is all about the setup.

I am kind of wondering how much longer the x86 chip design is going to last. From what I am seeing, it is being slowly phased out. The entire industry is in the middle of a huge change right now: x86 is going to be phased out, and I expect to see x64 as its replacement. Currently a lot of chips run an x86-64 setup, so they can run both ways. I am really interested in how things will change. Early on, 64-bit will be slower than 32-bit, but after some time it will become much faster. (This is based on the 16-bit to 32-bit transition: for a while 16-bit was a heck of a lot faster, but over time the software and chips improved and 32-bit pulled ahead.)
 
Phasing out on this large a scale is so impractical for the timescale most people have in mind that it is getting ridiculous. I am just going to quote someone from a post in another thread. Yes, it is me:

me said:
What percentage of computers out there are 64-bit? A very low number compared to the number that are 32-bit. So if a company were to make an app that only supported 64-bit systems, it would cut out a huge chunk of its market. What wise business is going to cut out a large portion of its potential customers? Nothing good would come of that, and it is a poor business model if they do. I know there are an increasing number of 64-bit systems in production today, but how long will it take to phase everything over to 64-bit? It would take years; I am guessing in the range of 5 years. I personally do not plan to have my MBP in 5 years, and if I do it will be my backup.

Just the laws of economics make it incorrect to assume that everything will switch over soon. Supply and demand: the supply and demand curve sets the price point and the potential profit of an item. So if a company were going to go out there and sell a product to a limited market like this, it would be in very short supply, to keep the price high enough to make it profitable. So you have to phase out the hardware and then phase in the software.

Now, what apps are there today that cannot be used on computers that are 2 years old? I am not talking about computers that were low-end back then, but rather the latest and greatest 2 years ago. Is there any software that cannot be used on them? As far as I know, they are all still very capable machines. (In case you were wondering, that was about the time the P4 with HyperThreading came out.) The current high-end games might have a problem, but that is always the case, due to the gaming companies having a market of people upgrading every six months. The average computer user upgrades on a few-year cycle, and this will cause the integration of 64-bit systems to take at least that long after the current high-end systems stop using 32-bit chips.
 
AJBMatrix said:
Phasing out on this large a scale is so impractical for the timescale most people have in mind that it is getting ridiculous. I am just going to quote someone from a post in another thread. Yes, it is me:

64-bit will play a larger and larger role as consumers wish to surpass the 4 GB RAM barrier of 32-bit systems. We're already seeing machines come standard with a large fraction of this maximum memory. My guess is that within 5 years almost all PCs will be 64-bit capable, and that's when the drive to develop 64-bit applications will arise.
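To make the 4 GB figure concrete (a quick back-of-the-envelope sketch, not from the original post): a flat 32-bit pointer can only name 2^32 distinct byte addresses, and that arithmetic is the whole barrier.

```python
# A 32-bit pointer can address 2**32 distinct bytes.
GIB = 2 ** 30  # bytes per GiB

max_bytes_32 = 2 ** 32
print(max_bytes_32)           # 4294967296 bytes
print(max_bytes_32 // GIB)    # 4  (the "4 GB barrier")

# A 64-bit address space is 2**32 times larger:
max_bytes_64 = 2 ** 64
print(max_bytes_64 // GIB)    # 17179869184 GiB (i.e. 16 EiB)
```

In practice OSes and chipsets reserve part of the 32-bit range for devices, so usable RAM tops out even below 4 GB, which only strengthens the point.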
 
bah

the Pentium M absolutely trounces the G4 and G5 in clock-for-clock SPEC scores.

In integer performance:
A 2 GHz Dothan had a SPECint2000 score of something like 1300.
A 2 GHz G5 has a SPEC score of something like 900.
A 2 GHz G4 has a SPEC score of 0, because they don't get that fast, and nobody publishes SPEC scores for any of the G4s.

In floating point performance (FLOPS!), the G5 scores in the 900-1000 range again, whereas the Pentium M shows up in the 600s, from what I recall. Excellent for those scientists who are folding protein strands.

Now, those are single-processor comparisons, and the processors in the MBP and iMac are better, and dual-core.


This article:
http://www.theregister.co.uk/2002/03/27/benchmarks_demolish_apple_speed_boasts/

posted in 2002, uses a 1 GHz G4, which happens to get a 187 SPEC score in floating point (which is where RISC is supposed to shine)! It was also equivalent to a 1 GHz Pentium III in integer scores.

I don't need to point out that the G4 is only up to 1.67 GHz right now, and G4 users are basically using a 5-year-old architecture that has been strung out well past its useful life.

Stop with the "theoretically the G4 is faster"; it's a bunch of crap. The G4 is clock for clock nearly equal to the Pentium III; it's just too bad that it is still at Pentium III clock speeds.

The Pentium M has been a kick-butt chip ever since inception: 90% of Pentium 4 performance at 1/2 to 2/3 the clock speed and 1/4 the power consumption.

The G5 is a fine processor that comes close to the Pentium in performance (memory latency issues aside), and a quad-processor G5 is a smokin' machine.

I'm done.

And I think barefeats.com is a fine website that publishes its results without trying to manipulate them to make Apple products win.

The iMac did great on processor-intensive tasks, and pretty darn decent in gaming considering its mid-range graphics card.
 
MacTruck said:
Well, Intel's roadmap shows a 32-core processor, but then again Intel's old roadmap showed a 4 GHz P4. :rolleyes:

Really do have to agree with this. Intel is going to end up backing itself into another corner: last time it was a search for pure clock speed, now it is shoving as many cores as it can fit onto a single die. The big issue is software; very little makes use of even dual cores/CPUs, and they've been available for years. Most of Intel's own compilers don't fully optimise for SMP.

The Quad G5 is a great machine, but as far as single-app usage goes, only Maya rendering and the like will tax all 4 cores.

I realise we have some very proud Intel iMac owners, but compared to a well-specified dual G5 your machines will be shown up for what they are: consumer units. A big advantage for the Intel units is their X1600 graphics card, which is one of the big reasons they shine over base G5 Power Macs and the last-gen G5 iMac.
 