With you 100%, but go to ANY tech/computer store and what do they market? Faster GHz = better... *sigh*

I think we may be saying the opposite things. I'm saying that more GHz is of course better.

It's better for everything!

Higher speed drives = better
Higher speed memory = better
Higher speed buses = better
Higher speed processors = better

For processors it's just that you also have to consider internal architecture - which is really all that video was saying too! I'm adding that within a given architecture it's the most important spec, period! Imagine the G4 in that video available at 800 and also 1600 MHz. Which one would you want? MHz matters - probably it matters the most. :)
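That "within a given architecture" point can be put in numbers with a toy throughput model (performance ≈ clock × IPC). A minimal Python sketch - the IPC values below are invented for illustration, not specs of any real CPU:

```python
# Toy model: single-thread throughput ~ clock (MHz) x IPC (instructions
# per clock). The IPC figures are made up purely for illustration; they
# are not measured values for any real chip.

def throughput(mhz, ipc):
    """Millions of instructions retired per second in this toy model."""
    return mhz * ipc

# Same architecture, so identical IPC: clock alone decides the winner.
g4_800 = throughput(800, ipc=1.0)
g4_1600 = throughput(1600, ipc=1.0)
assert g4_1600 == 2 * g4_800  # double the MHz, double the speed

# Different architectures: a higher-IPC design can beat a higher clock.
deep_pipeline = throughput(2000, ipc=0.6)  # fast clock, low IPC
wide_core = throughput(1250, ipc=1.2)      # slower clock, higher IPC
print(wide_core > deep_pipeline)  # True
```

Within one line the IPC term cancels out, which is exactly why clock is the spec that matters there; across lines it doesn't, which is the whole "MHz myth" argument.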


I actually would be quite happy if hardware manufacturers decided to stop making new chips and forced the software developers to refine the current software to utilise hardware better!!!

I agree. :) I have graphics apps on the 10-year-old Amiga 040/40MHz that run as fast as or faster than modern apps on my current 2660MHz machine, and the executables are easily a tenth to a hundredth the size. :p And most of it is because the developers were coming from the C=64 platform, where you needed to code at very low levels - down to the metal! So they weren't afraid to address the hardware. :)
 
I think we may be saying the opposite things. I'm saying that more GHz is of course better.

It's better for everything!

Higher speed drives = better
Higher speed memory = better
Higher speed buses = better
Higher speed processors = better

For processors it's just that you also have to consider internal architecture - which is really all that video was saying too! I'm adding that within a given architecture it's the most important spec, period! Imagine the G4 in that video available at 800 and also 1600 MHz. Which one would you want? MHz matters - probably it matters the most. :)

oh!! my mistake *glares*

It's hard to put it into context... if you have a bunch of CPUs with the same architecture, the same memory, the same cache etc., then yes, MHz becomes quite important as the sole indication of which is more powerful.

However, when you start mixing CPUs, architectures, cache, and - most importantly - the system it's running on, then the difference becomes somewhat mixed.

I haven't got enough knowledge to make a deduction as to whether the myth is actually true... I'll sit on the fence for now.
 
oh!! my mistake *glares*

It was true in the SPECIFIC example of G4 vs. P4 way back then. Today there's still some (but very little) truth to the general "MHz doesn't matter" would-be axiom. MHz is extremely important. Here's an extreme example:


6510 @ 2 MHz
68060 @ 66 MHz
5110 @ 1600 MHz
X5270 @ 3500 MHz

:D

With almost no exceptions, you can go to this page: http://en.wikipedia.org/wiki/Xeon, pick a CPU by clock speed alone, and end up with the fastest execution speed (for single-threaded apps), even though there are like 10 different architectures listed there.
 
It's still extremely important tho. Here's an extreme example:


6510 @ 2 MHz
68060 @ 55 MHz
5110 @ 1600 MHz
X5270 @ 3500 MHz

:D

With almost no exceptions, you can go to this page: http://en.wikipedia.org/wiki/Xeon, pick a CPU by clock speed alone, and end up with the fastest execution speed (for single-threaded apps), even though there are like 10 different architectures listed there.

I'm using my phone and haven't got time to view the link, but does it have the Nehalem vs. the older Xeon? I'm fairly certain that the 2.93GHz Nehalem compares quite well to the 3.2GHz counterpart.

Also, comparing similar architectures, such as all Intel, would seem to push the results in your favour; however, if it were to also include some AMD and PowerPC chips, I'm sure the outcome would be different?
 
I edited my post above a little.


I'm using my phone and haven't got time to view the link, but does it have the Nehalem vs. the older Xeon? I'm fairly certain that the 2.93GHz Nehalem compares quite well to the 3.2GHz counterpart.

Point one: that's a benchmark app.
Point two: there's very little difference in speed there.
Point three: it's going to be extremely difficult or impossible for a human user to tell any differences.
Point four: the 3.2GHz processor will indeed probably be faster at many things.
Point five: the 2.93 incorporates HT virtual-core technology.

Also, comparing similar architectures, such as all Intel, would seem to push the results in your favour; however, if it were to also include some AMD and PowerPC chips, I'm sure the outcome would be different?

Yes. That WAS a radical departure in chip architecture at the time. So it made a rather large difference for a while. The same things can be said of GPU vs. CPU today, but we're really comparing apples to oranges at such an extreme point.

Like a Maserati Corsa vs. Mazda's Cosmo. At the same RPM you're going to get radically different torque - but given pretty much any V8 comparison it's much more relevant. ;) Hehehe, my car metaphors are getting lamer and lamer... :D


 
I agree. :) I have graphics apps on the 10-year-old Amiga 040/40MHz that run as fast as or faster than modern apps on my current 2660MHz machine, and the executables are easily a tenth to a hundredth the size. :p And most of it is because the developers were coming from the C=64 platform, where you needed to code at very low levels - down to the metal! So they weren't afraid to address the hardware. :)

Assembly language? I did that in uni last year. It wasn't pretty but it ran quick. I actually liked it more than the C++ I'm learning now and found it easier!! Wish the same thing happened these days; there are too many overheads like impressive user interfaces and whatnot. Stupid artists :rolleyes:


I edited my post above a little.
got it.

Point one: that's a benchmark app.
Yea true, but what other ways are there to test which is faster?
Point two: there's very little difference in speed there.
The new Nehalem beats the older Xeon (Wolfdale??) chips, rendering your "more MHz = better" void ;) (somewhat)
Point three: it's going to be extremely difficult or impossible for a human user to tell any differences.
Yea, good point; comparing them would be like seeing which finger can type faster.
Point four: the 3.2GHz processor will indeed probably be faster at many things.
That's why we use benchmarks; otherwise there is no other way to tell :)
Point five: the 2.93 incorporates HT virtual-core technology.
A nice leap in architecture design that lessens the overall bottleneck on a computer; I guess this feature is the main reason why the new Nehalem chips are faster than their predecessors.


Yes. That WAS a radical departure in chip architecture at the time. So it made a rather large difference for a while. The same things can be said of GPU vs. CPU today, but we're really comparing apples to oranges at such an extreme point.

Yea, I agree, you can't compare GPU vs. CPU - they are built for different purposes! Much like your comparison below :eek:

Like a Maserati Corsa vs. Mazda's Cosmo. At the same RPM you're going to get radically different torque - but given pretty much any V8 comparison it's much more relevant. ;) Hehehe, my car metaphors are getting lamer and lamer... :D

I prefer not to use car metaphors - yea, they are pretty lame :p
 
Assembly language? I did that in uni last year. It wasn't pretty but it ran quick. I actually liked it more than the C++ I'm learning now and found it easier!! Wish the same thing happened these days; there are too many overheads like impressive user interfaces and whatnot. Stupid artists :rolleyes:

Sure assembler is faster but it's the virtual layers, APIs, frameworks, and etc. that I mean. Currently it's like telling your friend to call his Mom to check with his Dad to see if it's OK to date your sister. It used to be that he could just ask your sister out himself. Much faster. :) Tho it's lots easier for you just to tell your friend to call.
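That layering overhead can be sketched with a quick timing experiment. The layer names (`api`, `framework`, `vm_layer`) are invented stand-ins for the "call his Mom to check with his Dad" chain, not real libraries:

```python
import timeit

# Hypothetical pass-through layers standing in for APIs, frameworks,
# and virtual layers wrapping the same underlying work. All names
# here are invented for illustration.

def work(x):
    return x + 1

def api(x):        # layer 1: a library API
    return work(x)

def framework(x):  # layer 2: a framework on top of the API
    return api(x)

def vm_layer(x):   # layer 3: a virtual layer on top of everything
    return framework(x)

direct = timeit.timeit(lambda: work(1), number=200_000)
layered = timeit.timeit(lambda: vm_layer(1), number=200_000)
print(f"direct: {direct:.4f}s  layered: {layered:.4f}s")
# Both paths compute the same result; the layered path just pays
# three extra call frames on every invocation.
```

The layered path should come out slower even though it returns the identical answer - which is the "much faster to ask your sister out yourself" part. The "easier to tell your friend to call" part is why we accept the layers anyway.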


Yea true, but what other ways are there to test which is faster?

A better, more robust benchmark suite would do!


The new Nehalem beats the older Xeon (Wolfdale??) chips, rendering your "more MHz = better" void ;) (somewhat)

Only at some things. At other things it doesn't. Besides, my entire premise is based on generality; there are exceptions - tho most of them are very minor.


Yea, good point; comparing them would be like seeing which finger can type faster.

He he.. :D


That's why we use benchmarks; otherwise there is no other way to tell :)

Depends on what the benchmark is testing. As a rather old example, so everyone might follow easier: take a 120MHz CPU that didn't have an embedded FPU benchmarked against a 100MHz CPU that did. The 100MHz chip would kick its butt at FLOPS, but the 120MHz would win at just about everything else. If we don't know exactly what the benchmark app is testing, the score is 100% useless. We need to consider carefully what aspects of the chip (and system) are being tested for benchmarking to make any sense at all. Additionally, a benchmark app that tests each aspect separately in sequence may render a score which is completely unrealistic when compared to real-world operations.
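The FPU story can be turned into a toy model. All the per-operation costs here are invented, purely to show how a narrow benchmark can invert the ranking a mixed workload would give:

```python
# Toy model of the FPU example. Costs are in microseconds per operation
# and are made up for illustration only: the chip without an FPU must
# emulate floating point in software, so its float ops cost many cycles.

cpu_120_no_fpu = {"int": 1 / 120, "float": 20 / 120}  # floats emulated in software
cpu_100_fpu = {"int": 1 / 100, "float": 2 / 100}      # hardware FPU

def run(cpu, workload):
    """Total time for a workload given as {operation: count}."""
    return sum(cpu[op] * n for op, n in workload.items())

flops_bench = {"float": 1000}              # measures only floating point
real_world = {"int": 10000, "float": 50}   # mostly integer work

# The FLOPS-only benchmark crowns the 100MHz chip...
print(run(cpu_100_fpu, flops_bench) < run(cpu_120_no_fpu, flops_bench))  # True
# ...but the 120MHz chip wins the mixed workload.
print(run(cpu_120_no_fpu, real_world) < run(cpu_100_fpu, real_world))    # True
```

Same two chips, opposite verdicts, depending entirely on what the benchmark chose to measure.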


A nice leap in architecture design that lessens the overall bottleneck on a computer; I guess this feature is the main reason why the new Nehalem chips are faster than their predecessors.

Yeah, that and the embedded memory controller unit. :) Mostly the MCU I guess tho. Here is where benchmarking could tell the tale. ;)


Yea, I agree, you can't compare GPU vs. CPU - they are built for different purposes! Much like your comparison below :eek:

:D


I prefer not to use car metaphors - yea, they are pretty lame :p

I used a dating metaphor this time just for you. :D
 
Sure assembler is faster but it's the virtual layers, APIs, frameworks, and etc. that I mean. Currently it's like telling your friend to call his Mom to check with his Dad to see if it's OK to date your sister. It used to be that he could just ask your sister out himself. Much faster. :) Tho it's lots easier for you just to tell your friend to call.

Interesting - wasn't taught that the assembly level was virtual! That's our uni for us :mad: I'm guessing binary language is nearly the lowest level?

A better, more robust benchmark suite would do!

That would be very ideal! But how would we do it :-S (that's an MSN emoticon, hope you know it!)

Only at some things. At other things it doesn't. Besides, my entire premise is based on generality; there are exceptions - tho most of them are very minor.

Most of the minor exceptions are my points, ha! Generality > minority :eek:

Depends on what the benchmark is testing. As a rather old example, so everyone might follow easier: take a 120MHz CPU that didn't have an embedded FPU benchmarked against a 100MHz CPU that did. The 100MHz chip would kick its butt at FLOPS, but the 120MHz would win at just about everything else. If we don't know exactly what the benchmark app is testing, the score is 100% useless. We need to consider carefully what aspects of the chip (and system) are being tested for benchmarking to make any sense at all. Additionally, a benchmark app that tests each aspect separately in sequence may render a score which is completely unrealistic when compared to real-world operations.

OK, well that's fair enough; I guess to benchmark architecture vs. architecture they would need to be equally optimised/written/compared, which would be kind of impossible!

Yeah, that and the embedded memory controller unit. :) Mostly the MCU I guess tho. Here is where benchmarking could tell the tale. ;)

Ahh, wasn't aware of that - it would be a noticeable increase. Too bad they are working on fixing something that isn't the main bottleneck of the computer... (that's another conversation though!)


I used a dating metaphor this time just for you. :D

*rolleyes as much as humanly possible*

Thank god I have a stable girlfriend; I hate dating ;)
 
Interesting - wasn't taught that the assembly level was virtual! That's our uni for us :mad: I'm guessing binary language is nearly the lowest level?

I guess I didn't write that very clearly. :p
Maybe: "Sure, assembler is often faster. I was referring to virtual layers, APIs, frameworks, and etc." or something. :) Also by "virtual layers" I mean like hyper-visors and etc.


That would be very ideal! But how would we do it :-S (that's an MSN emoticon, hope you know it!)

Heh! :p


Ahh, wasn't aware of that - it would be a noticeable increase.

I wasn't aware of it either till someone here mentioned it. I still haven't read anything official myself on the topic. But I haven't looked either. :)
 
I guess I didn't write that very clearly. :p
Maybe: "Sure assembler is often faster. I was referring to virtual layers, APIs, frameworks, and etc." or something. :) Also by "virtual layers" I mean like hyper-visors and etc.

Ahh well... if you put it that way it makes sense! I never was good at reading.


I wasn't aware of it either till someone here mentioned it. I still haven't read anything official myself on the topic. But I haven't looked either. :)

That's the first and hardest step, I find... stupid motivation. Thanks for the talk, very informative :)
 
Interesting - wasn't taught that the assembly level was virtual! That's our uni for us :mad: I'm guessing binary language is nearly the lowest level?
Machine code is the lowest (pure binary). Writing with 1's and 0's... Yuck. :p
Assembly is the next step up, so the code is really lean, particularly when compared to the upper level languages. :eek: ;)
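A minimal illustration of that distinction, using a tiny 6502 routine (the thread's old friend). The byte values are the standard documented 6502 opcode encodings; the routine itself is invented:

```python
# The same tiny 6502 routine seen at both levels: the raw bytes are the
# machine code the CPU executes; the mnemonics in the comments are the
# assembly-language view of exactly those bytes.

program = bytes([
    0xA9, 0x05,        # LDA #$05   ; load the value 5 into the accumulator
    0x8D, 0x00, 0x02,  # STA $0200  ; store the accumulator at address $0200
    0x60,              # RTS        ; return from subroutine
])

print(program.hex(" "))  # a9 05 8d 00 02 60
```

Six bytes total: assembly is just a one-to-one human-readable spelling of those bytes, which is why hand-written asm stays so lean.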
 
Machine code is the lowest (pure binary). Writing with 1's and 0's... Yuck. :p
Assembly is the next step up, so the code is really lean, particularly when compared to the upper level languages. :eek: ;)

I've written in assembly; I found it more 'funner' than the C++ I'm doing now. However, I hate programming. Totally and utterly hate it.
 
Yeah, I'm an assembly language programmer as well. I liked it a lot. I like writing functions in asm and using a C compiler for structure and control. It's all in-line too, so there aren't multiple steps to compiling an executable. I don't know these new chips though. The last commercial project I was involved with was based on the Motorola 68K family. I know (knew?) the 6502 and the 8088 well. But I wasn't talking about code other than bloat. I was talking about the environment coders choose to run their apps in because it's so rich and makes things easy for them. I was a game developer so that might explain why I think like I do. :D OS? OS? Weeeee don't need no stinking OS!!! :D
 
I've written in assembly; I found it more 'funner' than the C++ I'm doing now. However, I hate programming. Totally and utterly hate it.
Assembly definitely has its place, such as programming a microcontroller. ;) C, C++ (and other derivatives) aren't too bad either, but can be more problematic. Experience definitely helps.

So don't get discouraged. You'll get the hang of it. ;) :)
Yeah, I'm an assembly language programmer as well. I liked it a lot. I like writing functions in asm and using a C compiler for structure and control. It's all in-line too, so there aren't multiple steps to compiling an executable. I don't know these new chips though. The last commercial project I was involved with was based on the Motorola 68K family. I know (knew?) the 6502 and the 8088 well. But I wasn't talking about code other than bloat. I was talking about the environment coders choose to run their apps in because it's so rich and makes things easy for them. I was a game developer so that might explain why I think like I do. :D OS? OS? Weeeee don't need no stinking OS!!! :D
It accomplishes lean code, and is great for something like BIOS, or any other flavor of firmware. :D :p Maybe reminding OS developers of this (*cough*..MS) could give them a clue. :D
 
Assembly definitely has its place, such as programming a microcontroller. ;) C, C++ (and other derivatives) aren't too bad either, but can be more problematic. Experience definitely helps.

So don't get discouraged. You'll get the hang of it. ;) :)

It's not that I don't like it... it's that I can't do it, and that puts me off. Some people have a programming mind and some don't; I'm one of the latter. I'm majoring in network & security, so there is really no need for intense programming for my job (unless I'm missing something)... It annoys me that I can't get my head around it, no matter how hard I study. That's life though, I guess.
 
It's not that I don't like it... it's that I can't do it, and that puts me off. Some people have a programming mind and some don't; I'm one of the latter. I'm majoring in network & security, so there is really no need for intense programming for my job (unless I'm missing something)... It annoys me that I can't get my head around it, no matter how hard I study. That's life though, I guess.
Well...Someone has to program it... ;) :D :p

Seriously though, I do imagine there is a reason for requiring it. At least in terms of understanding how the protocols (data handling) really work. Perhaps overkill for just creating scripts though. ;)
 
Well...Someone has to program it... ;) :D :p

Seriously though, I do imagine there is a reason for requiring it. At least in terms of understanding how the protocols (data handling) really work. Perhaps overkill for just creating scripts though. ;)

I'm afraid this poor soul won't be programming it (luckily - imagine a world full of Microsoft coders!).

The uni that I am at isn't very... (I want to say caring, but it's worse than that) interested in our technology department. I am doing classes that I don't particularly need (maths with sets, high-level programming, website design etc.) - I don't need that! I need something that teaches me the in-depth details of how protocols work, how to maintain topologies, how the network layer works, and all that other crap that I actually enjoy doing!

Anyway, enough babbling from me.
 
I'm afraid this poor soul won't be programming it (luckily - imagine a world full of Microsoft coders!).

The uni that I am at isn't very... (I want to say caring, but it's worse than that) interested in our technology department. I am doing classes that I don't particularly need (maths with sets, high-level programming, website design etc.) - I don't need that! I need something that teaches me the in-depth details of how protocols work, how to maintain topologies, how the network layer works, and all that other crap that I actually enjoy doing!

Anyway, enough babbling from me.
It's a way of life I suppose for most college students. :p IIRC, it was a result of the accreditation requirements, according to a conversation I had with one of the department heads. :rolleyes: :(

I always wondered if a more direct, condensed format would produce better results. Perhaps patterned after military tech training.
 
It's a way of life I suppose for most college students. :p IIRC, it was a result of the accreditation requirements, according to a conversation I had with one of the department heads. :rolleyes: :(

I always wondered if a more direct, condensed format would produce better results. Perhaps patterned after military tech training.

I can understand where they come from, I guess: try to make it as basic (yet still hard) as possible to let as many students as possible come to their uni... makes sense, I guess. But the quality of those students coming out would be fairly poor.

Hmmm, military training... for nerds? Yea, right :rolleyes:

I'm a fit nerd though, so I'd be cool with it.
 
I can understand where they come from, I guess: try to make it as basic (yet still hard) as possible to let as many students as possible come to their uni... makes sense, I guess. But the quality of those students coming out would be fairly poor.

Hmmm, military training... for nerds? Yea, right :rolleyes:

I'm a fit nerd though, so I'd be cool with it.
What annoyed me was the idea that they needed to "weed out" a percentage of accepted students. I understand the reasoning - job security more so than quality - but it could be done by other means.

As far as a military aspect, I meant in terms of the education methodology (classroom), not the PT aspect. :D :p
 
It's a way of life I suppose for most college students. :p IIRC, it was a result of the accreditation requirements, according to a conversation I had with one of the department heads. :rolleyes: :(

As a department head myself I can say "yup". If you want it another way, seek out a good technical college.

I always wondered if a more direct, condensed format would produce better results. Perhaps patterned after military tech training.

There ya go!


I can understand where they come from, I guess: try to make it as basic (yet still hard) as possible to let as many students as possible come to their uni... makes sense, I guess.

No, it's more mandated than that.
 
As a department head myself I can say "yup". If you want it another way, seek out a good technical college.
To me, accreditation makes far more sense as the reason than just some sense of spite or cruelty. ;) But I do wish the various boards would update their requirements in a fashion befitting modern needs. At least for the engineering departments. :D :p

There ya go!
I figured places like DeVry would use something along this approach. It makes sense for getting an individual trained in the shortest period of time, but might not be for everyone. You'd likely lose any sense of "well-roundedness" as well, as some subjects such as humanities would have to be eliminated to shave time.
 
As a department head myself I can say "yup". If you want it another way, seek out a good technical college.
To me, accreditation makes far more sense as the reason than just some sense of spite or cruelty. ;) But I do wish the various boards would update their requirements in a fashion befitting modern needs. At least for the engineering departments. :D :p

They do; it just takes time - one year, sometimes as much as three. ;) This is the difference between universities and a good engineering school like MIT. I worked at Kyoto University, Aichi University, and HAL sougougakuen. HAL is a technical college; officially in English: "HAL Institute of Computer Technology". Beautiful building, fun place to work! Here's a photo I took last year:

[attached photo: My_School_sm.jpg]

I was a full professor at Kyoto - that's a nice university.

I figured places like DeVry would use something along this approach. It makes sense for getting an individual trained in the shortest period of time, but might not be for everyone. You'd likely lose any sense of "well-roundedness" as well, as some subjects such as humanities would have to be eliminated to shave time.

Yeah, there's a difference between "an education" and "training". ;)
 
They do; it just takes time - one year, sometimes as much as three. ;) This is the difference between universities and a good engineering school like MIT. I worked at Kyoto University, Aichi University, and HAL sougougakuen. HAL is a technical college; officially in English: "HAL Institute of Computer Technology". Beautiful building, fun place to work! Here's a photo I took last year:
It seems like it takes longer for engineering. Closer to 10 or even 15 years behind in some cases. :eek: :p

Yeah, there's a difference between "an education" and "training". ;)
True, but the really sad part is, I've met plenty of both, and the DeVry people knew more about what they were doing. :eek: :D They got far more hands-on, and the experience made a big difference. Given enough time, the uni-educated will hopefully learn. I did, but I also had the benefit of learning from some really good engineers at Lockheed just shy of retirement. I always felt I learned more from those guys than I ever did in a classroom for undergrad material.
 