
shyataroo

macrumors regular
Original poster
Dec 17, 2003
Hell... Wanna join me?
Let's look at the differences in speed. According to Apple the speed increase is 4 times, correct? The major differences between the laptops are: the system bus frequency has been quadrupled (though the actual bandwidth presumably isn't 4 times higher, since only the frequency changed), the RAM is also faster (which by itself would make up for the latency of the old bus), and the hard drive Apple used for testing was probably the 100 GB 7200 RPM model (again faster than the PowerBook G4's 5400 RPM maximum). So if you gave a G4 the same bus speed, RAM speed and hard drive, it would run just as fast as, if not faster than, the Core Duo, and that's with only one processor. Put a dual-core G4 in there and clock it to the same speed as the Core Duo, and it would probably smoke it by 50% (this is just my rough estimate).



To help prove this theory, I would like to know the maximum output of the G4 processor alone in gigabytes per second, the maximum output of the Core Duo in gigabytes per second, and, while you're at it, the G5 as well.
 
shyataroo said:
Let's look at the differences in speed. According to Apple the speed increase is 4 times, correct? The major differences between the laptops are: the system bus frequency has been quadrupled (though the actual bandwidth presumably isn't 4 times higher, since only the frequency changed), the RAM is also faster (which by itself would make up for the latency of the old bus), and the hard drive Apple used for testing was probably the 100 GB 7200 RPM model (again faster than the PowerBook G4's 5400 RPM maximum). So if you gave a G4 the same bus speed, RAM speed and hard drive, it would run just as fast as, if not faster than, the Core Duo, and that's with only one processor. Put a dual-core G4 in there and clock it to the same speed as the Core Duo, and it would probably smoke it by 50% (this is just my rough estimate).

To help prove this theory, I would like to know the maximum output of the G4 processor alone in gigabytes per second, the maximum output of the Core Duo in gigabytes per second, and, while you're at it, the G5 as well.
Eh?
And why is this important to you?
There are no dual-core G4 chips available to Apple, so any comparison is meaningless. And Freescale can't be bothered to increase the bus speed from 167 MHz, so there is no comparison.
You can compare them in the real world, but a set of benchmarks for anything is useless. You just adjust the test to prove whatever you want.
 
shyataroo said:
To help prove this theory, I would like to know the maximum output of the G4 processor alone in gigabytes per second, the maximum output of the Core Duo in gigabytes per second, and, while you're at it, the G5 as well.

First of all, the output isn't measured in gigabytes; it's measured in something like FLOPS.

Secondly, the G4 might be faster clock for clock than the Pentium M, but there is no G4 that runs at that speed. Therefore there is no way for Apple to make a machine we could compare clock for clock. (I know you can get overclocked upgrade G4s, but that is a different matter.)

Fact is, the G4 could in theory be just as fast, but in reality Intel has a chip that actually is that fast. Why compare something that does not, and probably never will, exist to something that does?
 
Nickygoat said:
Eh?
And why is this important to you?
There are no dual-core G4 chips available to Apple, so any comparison is meaningless. And Freescale can't be bothered to increase the bus speed from 167 MHz, so there is no comparison.
You can compare them in the real world, but a set of benchmarks for anything is useless. You just adjust the test to prove whatever you want.
I do not like Intel. That's why I want to find out what the output of each processor is. Say the Intel 2.1 GHz dual-core outputs 160 GB per second of 1s and 0s and the G4's single core outputs 70 GB per second. I multiply the output in GB by 1,073,741,824 to get bytes per second, multiply the frequency in GHz by 1 billion to get cycles per second, and divide the output by the frequency to get the bytes per cycle of each processor running at its respective frequency. That answer proves, using no benchmarks, which is faster.
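(For what it's worth, here is a minimal sketch of the arithmetic being described, using the made-up throughput figures from the post and an assumed 1.67 GHz clock for the G4; the replies below explain why the resulting number says little about real performance.)

```python
# Minimal sketch of the bytes-per-cycle arithmetic described above.
# The throughput figures are the hypothetical ones from the post, and the
# 1.67 GHz G4 clock is an assumption, not a measurement.
GIB = 1_073_741_824  # bytes in one gibibyte

def bytes_per_cycle(output_gib_per_s: float, clock_ghz: float) -> float:
    """Divide raw output (bytes/s) by clock rate (cycles/s)."""
    return (output_gib_per_s * GIB) / (clock_ghz * 1e9)

print(bytes_per_cycle(160, 2.1))   # hypothetical dual-core Intel figure
print(bytes_per_cycle(70, 1.67))   # hypothetical single-core G4 figure
```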

TBi said:
First of all the output isn't measuring the gigabyes, its measured in something like FLOPS.

Secondly the G4 might be clock for clock faster than the PentiumM but there is no G4 that runs at that speed. Therefore there is no way for apple to make a computer which is clock for clock to let us compare it to. ( I know you can get over drive G4's but that is a different matter).

Fact is, the G4 in theory could be just as fast but in reality Intel have a chip that is that fast. Why compare something that does not and probably will not ever exist to something that does?


Why can't you measure the output in gigabytes?

Nickygoat said:
Eh?
And why is this important to you?
There are no dual-core G4 chips available to Apple, so any comparison is meaningless. And Freescale can't be bothered to increase the bus speed from 167 MHz, so there is no comparison.
You can compare them in the real world, but a set of benchmarks for anything is useless. You just adjust the test to prove whatever you want.
I was speaking theoretically.
 
shyataroo said:
I do not like Intel.
Why not? Does it really make a difference to you, in day-to-day usage (when all apps are Universal, one day), which chip is under the hood?
shyataroo said:
That's why I want to find out what the output of each processor is. Say the Intel 2.1 GHz dual-core outputs 160 GB per second of 1s and 0s and the G4's single core outputs 70 GB per second. I multiply the output in GB by 1,073,741,824 to get bytes per second, multiply the frequency in GHz by 1 billion to get cycles per second, and divide the output by the frequency to get the bytes per cycle of each processor running at its respective frequency. That answer proves, using no benchmarks, which is faster.
I understand the maths behind it, and did before, but the maximum clock for a G4 at the moment is 1.92 GHz. After Apple dumped them, Freescale isn't going to be developing the G4 any more.
Intel chips are the future for Apple, like it or not, and will only keep getting faster, far beyond what other companies can overclock the G4 to.
But I don't have any numbers for you :eek:
Is this an academic exercise or a personal preference?
 
certainly would depend on what you're doing

I have a dual 800 MHz G4. I recently bought a DUAL 1.6 GHz upgrade for it. Doing the things I do (rendering in Carrara), it still wasn't as fast as my SINGLE 1.6 GHz Pentium M laptop. I sent it back.

There might be something out there the G4 does faster, and you might be able to show some benchmark that says so, but I don't sit around running benchmarks, I run applications. The applications I run run faster on Intel chips. I'm really looking forward to Eovia releasing the Universal Binary of Carrara so I can order that new MacBook.

Regards.
 
shyataroo said:
Additionally, there are G4s out there that are clocked higher than the Core Duo laptop processor. I believe it tops out at 1.92 GHz.

Theories are useless when they can't be put into practice.

Whether or not a dual-core G4 is faster clock for clock than an Intel dual core is moot, because the dual-core G4 won't exist.

That's the whole point of switching to Intel... Motorola/Freescale (whatever they are called these days) couldn't produce the chips Apple needed, and neither could IBM. Intel delivered.

Go ahead, theorize and calculate all you want about faster G4s, I'm not going to stop you... but the rest of us are going to live in the here and now and enjoy our faster Intel Macs that actually exist.
 
more info

I just went back and checked the numbers from when I compared my dual 1.6 GHz G4 to a single 1.6 GHz Pentium M (a Dothan, I think, so not even as fast clock for clock as the new ones). The single 1.6 GHz Pentium M rendered consistently 25% faster than the dual 1.6 GHz G4.
 
ewinemiller said:
There might be something out there the G4 does faster, and you might be able to show some benchmark that says so

I hear a PowerBook G4 is faster at toasting bread and/or private parts than a Pentium M laptop.
 
shyataroo said:
I do not like Intel. That's why I want to find out what the output of each processor is. Say the Intel 2.1 GHz dual-core outputs 160 GB per second of 1s and 0s and the G4's single core outputs 70 GB per second. I multiply the output in GB by 1,073,741,824 to get bytes per second, multiply the frequency in GHz by 1 billion to get cycles per second, and divide the output by the frequency to get the bytes per cycle of each processor running at its respective frequency. That answer proves, using no benchmarks, which is faster.

I don't think you really have any understanding of processor architecture. How many bits move through a processor per cycle is not a meaningful measurement of capability or performance as far as getting work done is concerned. What is important is how many instructions each processor can complete in a clock cycle, and furthermore the amount of work those instructions complete.
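(As a toy illustration of that point, with entirely invented numbers: a chip that completes more useful instructions per cycle can out-perform one with a higher clock or more raw data movement.)

```python
# Toy model only: useful work per second = clock rate x instructions
# completed per cycle x work done per instruction. All figures are invented
# for illustration and describe no real chip.

def work_per_second(clock_ghz: float, ipc: float, work_per_instruction: float) -> float:
    return clock_ghz * 1e9 * ipc * work_per_instruction

chip_a = work_per_second(clock_ghz=2.0, ipc=3.0, work_per_instruction=1.0)
chip_b = work_per_second(clock_ghz=2.5, ipc=1.5, work_per_instruction=1.0)
print(chip_a > chip_b)  # True: the lower-clocked chip still gets more done
```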
 
shyataroo said:
I do not like Intel. That's why I want to find out what the output of each processor is.

Furthermore, your "experiments" are already biased because all you want to do is try to come up with some computable number (which may or may not be completely meaningless) that shows a G4 sporting a bigger number than an Intel processor.

And if you are going to compare processors, why choose the G4 vs. Core Duo? Why not a G5 vs. Core Duo? Oh I bet I know, because you have a G4 and you need to make yourself feel better. I understand.
 
johnnybluejeans said:
I don't think you really have any understanding of processor architecture. How many bits move through a processor per cycle is not a meaningful measurement of capability or performance as far as getting work done is concerned. What is important is how many instructions each processor can complete in a clock cycle, and furthermore the amount of work those instructions complete.

And if I have this RISC/CISC thing right, shyataroo's point is obvious:

* RISC processors have a simpler instruction set and hence need more instructions to perform the same operation;
* RISC processors "get through" more data on each cycle to make up for this apparent drawback.

So the bottom line is, you measure performance in FLOPS and not the number of bits that pass through the processor each cycle. My understanding of processors is pretty sketchy, but I think my post makes sense.
 
What is the point of all this? There are not, and will not be, any G4s in Apple machines from here on, which makes this whole "argument" void :confused: :rolleyes:
 
whocares said:
I hear a PowerBook G4 is faster at toasting bread and/or private parts than a Pentium M laptop.

LOL!

The PowerPC is dead to us now, get used to it. No comparison, no miracle, etc. is going to change the decision made a year ago.

All that matters to me in the end is that the new machine runs faster than the earlier ones.

Just wait to see what the Powermac becomes (I have heard Apple wants to continue its entrance into corporate/educational sectors, so many cores may be the norm) and what new portable devices Intel can allow.
 
Core Duos

It's all about the whole system. The G5 was slower than the G4 at lower clock speeds; in some cases a 1.5 GHz G4 could beat out a 1.6 or 1.8 GHz G5. But when the G5 went dual and got into the upper clock speeds, it rocked the socks off the G4. Apple knows what it's doing; I trust my Steveness.
 
shyataroo said:
Let's look at the differences in speed. According to Apple the speed increase is 4 times, correct? The major differences between the laptops are: the system bus frequency has been quadrupled (though the actual bandwidth presumably isn't 4 times higher, since only the frequency changed), the RAM is also faster (which by itself would make up for the latency of the old bus), and the hard drive Apple used for testing was probably the 100 GB 7200 RPM model (again faster than the PowerBook G4's 5400 RPM maximum). So if you gave a G4 the same bus speed, RAM speed and hard drive, it would run just as fast as, if not faster than, the Core Duo, and that's with only one processor. Put a dual-core G4 in there and clock it to the same speed as the Core Duo, and it would probably smoke it by 50% (this is just my rough estimate).

To help prove this theory, I would like to know the maximum output of the G4 processor alone in gigabytes per second, the maximum output of the Core Duo in gigabytes per second, and, while you're at it, the G5 as well.


What in the hell are you talking about?!?
 
whocares said:
And if I have this RISC/CISC thing right, shyataroo's point is obvious:

* RISC processors have a simpler instruction set and hence need more instructions to perform the same operation;
* RISC processors "get through" more data on each cycle to make up for this apparent drawback.

So the bottom line is, you measure performance in FLOPS and not the number of bits that pass through the processor each cycle. My understanding of processors is pretty sketchy, but I think my post makes sense.

Peak FLOPS, unfortunately, is also completely meaningless. Average FLOPS during a real task is more useful, but even that is difficult to compare because of differences in how much work is done per "op". For example, a PowerPC fmadd instruction does a multiply and an add at the same time. Is that one FLOP or two? What if the multiply is by 1 (i.e. has no effect)? An x86 instruction might load a value from memory and then add it. How many FLOPs is that? (Answer: it's actually broken down into a load and an add internally, much like on a RISC processor, so it would count as one FLOP and one memory op. Older x86 processors didn't do this, though; they executed it as one instruction.)

Even if they were comparable, it would still only measure floating-point performance (heck, a lot of PPC FLOPS fans quote peak AltiVec FLOPS, which is even more specialized).
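(A quick sketch of how the counting convention alone changes the headline number; the clock, pipeline count and vector width below are assumptions for illustration, not the specs of any real chip.)

```python
# Same hypothetical core, two different "peak GFLOPS" claims, depending on
# whether a fused multiply-add counts as one floating-point op or two.
clock_ghz = 1.5      # assumed clock, illustrative only
fma_pipelines = 1    # assumed number of multiply-add pipelines
vector_width = 4     # e.g. a 4-wide SIMD unit

fmas_per_second = clock_ghz * 1e9 * fma_pipelines * vector_width
print("FMA counted as 1 op:", fmas_per_second / 1e9, "peak GFLOPS")
print("FMA counted as 2 ops:", 2 * fmas_per_second / 1e9, "peak GFLOPS")
```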

The conclusion you can get from this is that it's impossible to measure a processor's speed with a single number. Consider this:

Compiling Adium on my 1 GHz G4 takes 10-15 minutes.
Compiling Adium on a Core Duo iMac takes 2 minutes and 49 seconds.
My 1 GHz G4 can run RC5 at least as fast as the iMac.

Which one is faster? They both are. The only useful benchmark is how well it works for what you do. If you can get the job done 20% quicker with one machine, use it. If the program you need doesn't run on one machine, don't use it.
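(In that spirit, a minimal sketch of the only measurement that really matters: timing your own workload on each machine. The workload function below is just a stand-in for whatever you actually run.)

```python
import time

def my_real_workload():
    # Stand-in for whatever you actually do: a render, a compile, an RC5 block...
    sum(i * i for i in range(10_000_000))

start = time.perf_counter()
my_real_workload()
print(f"took {time.perf_counter() - start:.2f} s on this machine")
```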

<edit>
and yeah, the OP is ridiculous
</edit>
 
Seriously, how can anyone measure processor performance in terms of BANDWIDTH?!?!?! That means absolutely nothing, because it doesn't show how many operations the processor is applying to the data it's receiving. You can design a chip that just lets data pass through without doing much and get insane throughput compared to a general-purpose processor like the Core Duo or G4. By this logic you could start comparing RAMDACs to CPUs on throughput, yet a RAMDAC is useless for anything other than converting digital video data to analog form for display on a monitor.
 
topicolo said:
Seriously, how can anyone measure processor performance in terms of BANDWIDTH?!?!?!


Dunno, my G3 iBook is connected to 2 Mbit DSL but my [hypothetical friend's] Intel iMac is only on dial-up. So I guess the G3 is faster than the Core Duo.


iMeowbot said:
Horses would be faster than jets, if only the horses could run faster.

:D :D :D :D :D :D :D :D
 