GHz Degression

Discussion in 'Mac Pro' started by Foxdog175, Jun 5, 2012.

  1. Foxdog175 macrumors regular

    Joined:
    Apr 3, 2008
    #1
    Can anyone explain why, over the years, the GHz speed has decreased? My early '08 Mac Pro is an octo-core 3.2GHz machine. Why are the 2010 models 2.66/2.93GHz at the high end? I'm sure the 2010 models (and soon-to-be 2012 models) are much faster -- I just don't understand why the clock speed went down. From a marketing standpoint, isn't that underwhelming?
     
  2. maflynn Moderator

    maflynn

    Staff Member

    Joined:
    May 3, 2009
    Location:
    Boston
    #2
    Mostly because GHz is no longer the major factor in performance. There are other, more efficient ways to make CPUs faster than boosting the clock speed.
     
  3. deconstruct60 macrumors 604

    Joined:
    Mar 10, 2009
    #3
    Because it hasn't. If you keep the core count constant in the comparisons (i.e., look at the turbo mode for the newer generations of the most equivalent processor) the GHz has generally gone up.

    Generally, you can't both add substantially more cores and crank up the GHz at the same time. However, they can insert modes where some of the cores are turned off and crank up the GHz just on that subset. Over several tick/tock iterations that subset of cores will be just as large or larger than the full set of the older model.

    Only if you're marketing to people who don't actually measure power efficiency, throughput, and performance. All of those have gone up dramatically.
     
  4. xgman macrumors 601

    xgman

    Joined:
    Aug 6, 2007
    #4
    In a lot of ways it actually is, at least for me. Play around with some stock and overclocked PCs and you will see. I also think Intel is holding back on raw GHz for marketing reasons, to avoid the kind of monopoly position that invites regulation, among other things.
     
  5. Umbongo macrumors 601

    Umbongo

    Joined:
    Sep 14, 2006
    Location:
    England
    #5
    The speed went down because Apple stopped offering the top processor options from Intel, which would have been 3.33GHz at the time of launch. These top processors were more expensive - $1,600 each vs. $1,280 each for the ones in your Mac Pro - and Apple had already increased the cost of dual-processor systems a lot. They also run hotter than the processors Apple has used in the current Mac Pro design - 130W TDP vs. 95W TDP. Your Mac Pro has 150W TDP processors, but it also has extra cooling over the lower models in the line and a different design.

    So I would say a combination of price and cooling just made it not worth it for them. Actual Intel clock rates have risen continually since the move from the Pentium 4 to the Core architecture, and you can now purchase several Intel processors at stock speeds over 3.5GHz, including a Xeon at 4.4GHz if you so wish.
     
  6. xgman macrumors 601

    xgman

    Joined:
    Aug 6, 2007
    #6
    which xeon model is at 4.4?
     
  7. Litany macrumors member

    Joined:
    Jun 5, 2012
    #7
    No it hasn't.
    First: "Turbo" is pure marketing. CPUs rarely actually see those high speeds in the real world, since the OS spreads work across the cores rather than just dumping everything onto one or two.

    Second: The Pentium 4 hit 3.8GHz in 2004, over 7 years ago. So, no, GHz has not "generally gone up".
     
  8. Umbongo macrumors 601

    Umbongo

    Joined:
    Sep 14, 2006
    Location:
    England
    #8
    X5698
     
  9. Litany macrumors member

    Joined:
    Jun 5, 2012
  10. Umbongo macrumors 601

    Umbongo

    Joined:
    Sep 14, 2006
    Location:
    England
    #10
    And the Pentium 4s at 3.8GHz were single core. The fact remains that Intel processors are available at higher clock rates than they have been in the past.
     
  11. deconstruct60, Jun 5, 2012
    Last edited: Jun 5, 2012

    deconstruct60 macrumors 604

    Joined:
    Mar 10, 2009
    #11
    Take a look at the single threaded Cinebench benchmark here.


    Source: http://www.anandtech.com/show/5553/the-xeon-e52600-dual-sandybridge-for-servers/10

    See the E5-2660 at 2.2GHz edging out the 2.93GHz part from the previous architecture generation? The multi-threaded result is similar. In the SQL Server benchmarks the 2.2GHz part, for the most part, holds its ground against a previous-generation 2.66GHz part (and there are some 6C versus 8C metrics to show the differences).

    There are software applications that are thread-limited. In the 3DS Max Architecture 2012 benchmark on the next page, the 6C version beats the 8C one. So yeah... turbo does work "in the real world".

    First, if a bonehead OS scheduler is pissing the performance into the crapper, the problem is the bonehead scheduler. If you have 5 relatively slow tasks, there is no good reason to push them onto more cores.

    Second, if you actually have a large amount of work to do, then you don't need the higher speeds, because you have more cores. The additional math/branch/computation functional units get more work done.

    Again, you can go back to the graphs in the AnandTech article that measure SQL Server response time. The E5-2660 at 2.2GHz turns in better times when there are 8 cores active (which can't clock quite as high) than the more restricted 6C configuration (which can clock a bit higher in some contexts). Cores trump GHz on an extremely wide variety of workloads.

    First, the P4 base design sucked. That's why Intel threw it into the trashcan.

    Second, it is an insanely silly metric. Maybe instructions per cycle or some other throughput metric is material, but GHz is just how quickly the clock goes up and down. It doesn't measure "work done". It is like fixating on the gas engine with the highest RPM rating as being the best.
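    The clock-vs-work point above reduces to simple arithmetic: effective throughput is clock rate times instructions retired per cycle (IPC). A minimal sketch, where the clock speeds echo the chips discussed in this thread but the IPC values are hypothetical, invented purely for illustration:

```python
# Effective single-core throughput is clock rate times instructions per
# cycle (IPC). The IPC values below are hypothetical illustrations.
def throughput_gips(clock_ghz, ipc):
    """Billions of instructions retired per second for one core."""
    return clock_ghz * ipc

older = throughput_gips(2.93, ipc=1.0)  # older architecture, higher clock
newer = throughput_gips(2.20, ipc=1.5)  # newer architecture, better IPC

print(f"older: {older:.2f} Ginstr/s, newer: {newer:.2f} Ginstr/s")
```

    With these made-up numbers the 2.2GHz part retires more instructions per second than the 2.93GHz one - the AnandTech E5-2660 result in miniature.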
     
  12. ivoruest macrumors 6502

    ivoruest

    Joined:
    Jul 12, 2010
    Location:
    Guatemala
    #12
    My MacBook Pro 2011 with an Intel Core i7 Sandy Bridge (2760QM) runs at 2.4GHz with 4 cores and it is faster (as rated by Cinebench 11) than my iMac 2010 with an older Core i5 Nehalem (760) that runs at 2.8GHz with 4 cores. :D
     
  13. VirtualRain macrumors 603

    VirtualRain

    Joined:
    Aug 1, 2008
    Location:
    Vancouver, BC
    #13
    A short article...

    http://spectrum.ieee.org/computing/hardware/why-cpu-frequency-stalled

    In effect, the physics of heat, power, and current leakage make it prohibitive to push CPU frequencies beyond about 4GHz. The cooling equipment required is just not practical.

    There are much more efficient ways to get performance - parallelism, increasing the instructions per clock, and cache - which is the direction Intel and AMD headed after the Pentium 4 essentially hit the thermal wall with its complex, power-hungry approach in 2004.
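    The thermal wall follows from the standard dynamic-power relation P ≈ C·V²·f, plus the fact that supply voltage has to rise roughly in step with target frequency, so power grows close to f³. A rough sketch, with a hypothetical switched capacitance and baseline voltage chosen only to give plausible-looking wattages:

```python
# Dynamic CPU power scales as C * V^2 * f. Since supply voltage must rise
# roughly linearly with target frequency, power grows close to f^3.
# The capacitance and baseline voltage here are hypothetical.
def dynamic_power_watts(freq_ghz, base_freq_ghz=3.0, base_voltage=1.2,
                        switched_capacitance_nf=30.0):
    voltage = base_voltage * (freq_ghz / base_freq_ghz)  # crude V-f scaling
    # nF * V^2 * GHz works out to watts (the 1e-9 F and 1e9 Hz cancel).
    return switched_capacitance_nf * voltage**2 * freq_ghz

for f in (3.0, 4.0, 5.0):
    print(f"{f:.0f} GHz -> {dynamic_power_watts(f):.0f} W")
```

    Under this model, doubling the clock multiplies power by eight, which is why pushing much past 4GHz stops being practical to cool.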
     
  14. Demigod Mac macrumors 6502a

    Joined:
    Apr 25, 2008
    #14
    For decades the hertz rating was indeed a reliable indicator of CPU performance (Intel invested a ton of marketing into promoting this idea, hence it still persists to this day).

    Then at 3-4GHz they hit the brick wall of the laws of physics. Anything faster can easily overheat and drain a ton of power. So instead of increasing the clock speed, they now look for other ways to improve performance, including but not limited to adding more cores.

    So now clock speed is only a reliable performance measurement when you compare CPUs from the same family and generation. That is how a new 2.4GHz CPU can outperform a 2.8GHz CPU from an older generation, as someone here already mentioned.
     
  15. derbothaus macrumors 601

    derbothaus

    Joined:
    Jul 17, 2010
    #15
    Fairly incorrect. Get yourself a Windows install and check the core speeds during various actions. They throttle all over the place. You idle at the specified speed, sometimes lower. Same with OS X. Clock speed is not controlled by the OS. Look at Tutor's post regarding turbo tweaking: lower your default clock and raise the turbo bins and you get even better performance, not less. Most software still leans single-threaded.
     
  16. Foxdog175 thread starter macrumors regular

    Joined:
    Apr 3, 2008
    #16
    Thanks for the replies, everyone. The 'brick wall', so to speak, that Intel hit sums it up for me. I wasn't aware that they ran into that issue.

    Do you think there will be a point in time when CPU manufacturers won't be able to 'outpower' the previous generation of machines due to power consumption issues?
     
  17. Tutor macrumors 65816

    Tutor

    Joined:
    Jun 25, 2009
    Location:
    Home of the Birmingham Civil Rights Institute
    #17
    View of a paradoxical contrarian

    Right on. User-adjustable underclocking and turbo biasing rule on Nehalems and Westmeres. Sandy Bridge E5s build it in (but much less than I would prefer), yet are fully locked down so the user cannot customize it. So as for top-of-the-line dual eight-core Sandy Bridge E5 systems, you'll see some that bench below my intentionally underclocked dual X5680s [3.3 GHz (factory) -> pushed down to 2.5 GHz or less] and some that bench better, but not by much (I mean less than ten percent at most). Yet if you compare the deltas between the benches of my underclocked dual X5680s and those of standard (and even overclocked) dual X5680s, you'll see why I say, "If you want a fast system then make it slow; if you want an even faster system then make it even slower; but if you want the fastest system make it the slowest of them all, all the while allocating that otherwise unused potential to turbo biasing." There, the deltas are about 1.5 times higher. The turbo ratios (for each of my dual X5680s) are DDDDEE (or 13, 13, 13, 13, 14, 14). My view is: it's not all about running fast all of the time, but rather about running the fastest when it really counts.
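    Tutor's DDDDEE ratios can be sketched in code: on Westmere, core clock is bclk x multiplier (bclk nominally 133MHz), and each turbo bin adds one multiplier step, with a separate bin per number of active cores. The underclocked base multiplier below is an assumption chosen to land near the 2.5GHz figure in the post; treat the resulting frequencies as illustrative, not measured:

```python
# Westmere clocking: core_freq = bclk * (base_multiplier + turbo_bins),
# with bclk nominally 133 MHz and one turbo-bin entry per active-core count.
BCLK_GHZ = 0.1333

def turbo_freq_ghz(base_mult, bins):
    return BCLK_GHZ * (base_mult + bins)

# DDDDEE in hex = 13, 13, 13, 13, 14, 14 extra bins for 6..1 active cores.
bins_by_active_cores = {6: 13, 5: 13, 4: 13, 3: 13, 2: 14, 1: 14}

base_mult = 19  # assumed: 19 * 0.1333 is about 2.53 GHz, near the underclock
for cores in sorted(bins_by_active_cores, reverse=True):
    freq = turbo_freq_ghz(base_mult, bins_by_active_cores[cores])
    print(f"{cores} cores active -> {freq:.2f} GHz")
```

    The factory X5680 base is multiplier 25 (25 x 0.1333 ≈ 3.33GHz); underclocking the base frees thermal headroom that the large turbo bins spend only when few cores are loaded, which is the "slowest is fastest" trade the post describes.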
     
  18. derbothaus macrumors 601

    derbothaus

    Joined:
    Jul 17, 2010
    #18
    That's what happened with the P4, and why they are using cache levels and thermal throttling to give you what you need when you need it, as opposed to 4GHz at all times. They are merging all their processor tech: multiple cores and cache leveraged from the Motorola days, short pipes like the P3, power throttling like Centrino, 3D transistors, etc...
     
  19. VirtualRain, Jun 5, 2012
    Last edited: Jun 5, 2012

    VirtualRain macrumors 603

    VirtualRain

    Joined:
    Aug 1, 2008
    Location:
    Vancouver, BC
    #19
    I don't think physics will be the limiting factor. It's more of an economics issue than a technical one in my mind.

    10 years ago there used to be a couple dozen semiconductor companies that could invest what was necessary to drive to the next level of performance (typically a die shrink, although there's much more to it now). Nowadays, we're down to maybe two or three such companies. When it gets to the point where Intel is the only one that can afford the (continuously increasing) investment necessary to innovate and drive CPU performance to the next level, we may see their incentive to make such massive investments drop off. As a result, CPU performance might hit a plateau for a while until someone comes along who finds a way, or develops new technology, to drive things to a new level much more affordably. In other words, once this point of non-competitive pressure is reached, it may take some kind of disruptive change to kick-start a new hockey-stick-like acceleration in CPU performance.

    But all this is just my opinion.
     
  20. derbothaus macrumors 601

    derbothaus

    Joined:
    Jul 17, 2010
    #20
    Also, it has only been a few years since Intel added back Hyper-Threading, which was huge - and the reason little ol' iMacs and MacBook Pros even compete with towers these days. They jumped 50% in performance in one tock. It was not that long ago. Have patience. The i7 destroyed the slow gains Core 2 was making with clock bump after clock bump.
     
  21. deconstruct60 macrumors 604

    Joined:
    Mar 10, 2009
    #21
    It is not the power consumed that is the primary issue. It is getting useful work actually done with the power consumed (not leaking it away having done nothing productive, or spending cycles stalled, effectively doing no-ops, because there is no data to work on).

    It is about being more effective with the power you do have, not an ever-growing thermal/power budget. There is much more that can be done. The more recent significant advances in power management all came to market well after that 2008 article. Neither Intel nor AMD has exhausted the possibilities there.

    Yeah, there could be a time: when it stops being cost-effective to shrink to smaller geometries. If the correlation goes something like a $4B plant to make 22nm, $8B to make 12nm, $14B to make 10nm, then the plants to make this stuff will get so expensive that only extremely high-volume parts can be made in them. The magnitude of the generation shifts is also going to get smaller: at 22nm it is going to be tough to knock another 20nm off the size, compared to going from 48nm to 28nm.

    CPU manufacturers, as such, are going to go away. They'll need to put other stuff than a CPU on the die to justify a high enough unit price to offset the plant infrastructure fees. They aren't going to be "CPU" manufacturers; they are going to be essentially computer manufacturers. A large amount of the whole computer will be just one die, plus a smaller amount of I/O that just needs to be larger to connect to the outside.

    In short, once you stop myopically focusing on making the clock go up and down and start making computation more efficient, there is lots to do. Implementing functionality in hardware, for one (e.g., Ivy Bridge packages being able to do next-gen HD 4K image manipulation) - we have only really just scratched the surface on that.
     
  22. velocityg4 macrumors 68040

    velocityg4

    Joined:
    Dec 19, 2004
    Location:
    Georgia
    #22
    Not really, though it just seems that way since each new generation usually had much higher clock speeds. For instance, a 16MHz 68030 was faster than a 16MHz 68020, and a 25MHz 386DX was slower than a 25MHz 486DX. In each case the newer generations had much higher maximum clock rates.

    Even then, architecture mattered as much as clock speed. The same has held true for most processor generations, the P4 being a notable exception, as the Pentium III was faster clock-for-clock. Though as I understand it, the Pentium M and then the Core architectures are the actual successors of the Pentium III.
     
  23. deconstruct60 macrumors 604

    Joined:
    Mar 10, 2009
    #23
    It won't be just one. The number of companies doing SoC packages is at least as high now as it was 10 years ago. You don't have to own a fab to make a chip; you just have to lease time from one of the fewer players that does own one. Distributed costs make the more expensive factories viable. The only problem is you have to pony up even that fractional share. Joe Average can't walk up to NetJets and get a seat on a time-share jet, and neither will a relatively super-low-volume CPU vendor get time on the bleeding-edge lines.

    As long as folks aggregate R&D cost contributions they can keep up. That is how ARM keeps up: they take money from a growing number of "customers" and can keep pace on R&D in their space. That is sustainable against the Intel juggernaut because they are smart about how they spend their time and effort.

    TSMC and Global Foundries will likely keep up a relatively short distance behind. If IBM exits the business and the R&D, that would go further toward handing Intel the Ring and crowning Intel Sauron the Great.

    Nobody is going to out-sumo-wrestle Intel. The easiest way to avoid that, though, is not to start a sumo-wrestling match with them.
     
  24. VirtualRain macrumors 603

    VirtualRain

    Joined:
    Aug 1, 2008
    Location:
    Vancouver, BC
    #24
    I agree with you wholeheartedly, but it's getting to the point, as you said in your post above, that the investment necessary to build a foundry for the next node is prohibitively expensive. How long can TSMC and Global Foundries keep pouring money in at this rate?
     
  25. theSeb macrumors 604

    theSeb

    Joined:
    Aug 10, 2010
    Location:
    Poole, England
    #25
    And they were slower. What is this argument all about anyway?
     
