The 3Ghz wall?

Discussion in 'Mac Pro' started by JesterJJZ, Mar 11, 2010.

  1. JesterJJZ macrumors 68020

    JesterJJZ

    Joined:
    Jul 21, 2004
    #1
    I have what may be a simple question. I know about the megahertz myth and how it relates to computers, but I do want to know why we seem to have slowed to a crawl getting past the 3GHz mark.

    In the summer of 2004 I got my 2.5GHz DP G5, which Apple later replaced in 2007 with a Mac Pro 1,1 2.66GHz quad.

    I've had a 2.53GHz Pentium 4 HP since about 2003. The thing can barely chug along anymore.

    Yet over the past 7 years we've been dancing around the 3GHz number and just adding more cores. Will we ever see a 4GHz processor?

    I am not complaining, I'm just curious and I'm sure you guys can answer this for me.

    Thanks! :)
     
  2. 300D macrumors 65816

    300D

    Joined:
    May 2, 2009
    Location:
    Tulsa
    #2
    Nope. More cores is the future of computing.
     
  3. Umbongo macrumors 601

    Umbongo

    Joined:
    Sep 14, 2006
    Location:
    England
    #3
    And smaller dies make faster clock speeds possible.

    Jester,

    Next week Intel are releasing a 3.46GHz processor that turbo boosts to 3.73GHz. Intel could offer 4GHz processors now if they wanted to, and said as much back with Penryn. There are two big reasons they don't. One is that, despite all the claims of overclockers who push these processors that high, they are not as stable at such speeds, so supply would be low because of the way Intel makes processors (they produce a batch and only a smaller number of chips work at the faster speeds). The bigger reason is that they don't need to from a business perspective. Companies are happy with stability, and enthusiasts can get a 4GHz machine out of a $300 processor ($200 with some discounts). There is also no competition; Intel controls the market at that performance level. If AMD had processors that powerful, so would Intel.
     
  4. mason.kramer macrumors 6502

    mason.kramer

    Joined:
    Apr 16, 2007
    Location:
    Watertown, MA
    #4
    Sort of?

    Many (maybe even most) computation tasks are inherently sequential. These tasks can only be sped up by a core with more instructions per second (= instructions per clock * clock speed).

    Other tasks have tiny amounts of concurrency that can only be exploited with vector instructions and SSE registers, which provide concurrency *within* a single core.

    Since we aren't up against theoretical limits, I think it's safe to say that IPS per core will continue to increase in the future.
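    The sequential-vs-parallel tradeoff above is usually formalized as Amdahl's law: the serial fraction of a task caps the speedup you can get from extra cores, no matter how many you add. A quick sketch (the 50% parallel fraction is just an illustrative number):

    ```python
    # Amdahl's law: speedup from n cores when only a fraction p of the
    # work can be parallelized. The serial fraction (1 - p) bounds the gain.
    def amdahl_speedup(p, n):
        """Overall speedup with parallel fraction p on n cores."""
        return 1.0 / ((1.0 - p) + p / n)

    # A task that is 50% sequential never even doubles in speed:
    print(amdahl_speedup(0.5, 4))     # 1.6x on a quad-core
    print(amdahl_speedup(0.5, 1000))  # still under 2x with 1000 cores
    ```

    Which is exactly why a faster single core still matters for those workloads.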
     
  5. DualShock macrumors 6502

    Joined:
    Jun 29, 2008
    #5
    Don't forget about the laws of physics.

    A few years back (up until around 2005) everyone (except maybe AMD) kept trying to crank up the GHz on their CPUs, only to find out that they ran way too hot. Clearly marketing was running the show back then, and the law of diminishing returns started to kick in. I read somewhere that the heat generated per unit area in P4 CPUs of the time was approaching that of a nuclear reactor.

    That's why we never saw a 3 GHz G5. A good CPU architecture (or more cores plus multithreaded software) is much better than just raw speed. If you look at reviews of Athlon 64s from around that time, they ran circles around P4s whose clock speeds were nearly 1 GHz higher.
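    The heat problem falls out of the standard first-order approximation for dynamic CPU power, P ≈ C·V²·f. Since higher clocks usually also demand higher voltage, power grows much faster than the clock does. A rough sketch (the capacitance and voltage figures below are illustrative, not real chip specs):

    ```python
    # Dynamic switching power roughly scales as P = C * V^2 * f.
    # The capacitance value here is made up for illustration.
    def dynamic_power(c_farads, volts, hz):
        return c_farads * volts**2 * hz

    base = dynamic_power(1e-9, 1.2, 3e9)  # baseline: 3 GHz at 1.2 V
    hot  = dynamic_power(1e-9, 1.4, 4e9)  # 4 GHz typically needs more voltage
    print(hot / base)  # ~1.81x the power for a 1.33x clock bump
    ```

    That superlinear growth is the diminishing return: each extra GHz costs disproportionately more heat.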
     
  6. Dr.Pants macrumors 65816

    Dr.Pants

    Joined:
    Jan 8, 2009
    #6
    Speaking of the laws of physics, signals can only travel so fast across a chip. Methinks doubling the processor speed halves the distance a signal can cover "instantaneously" within one clock cycle; go past that limit and you have to take propagation time into account, which increases the complexity of the processor.
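    That point can be put in numbers: in one clock period a signal can travel at most c/f (and on-chip signals are considerably slower than light in a vacuum). Doubling the clock really does halve the reach:

    ```python
    # Upper bound on how far a signal can travel in one clock period.
    # Uses light speed in vacuum; real on-chip propagation is slower.
    C = 299_792_458  # speed of light, m/s

    def cm_per_cycle(ghz):
        return C / (ghz * 1e9) * 100  # distance per cycle, in centimeters

    print(cm_per_cycle(3))  # ~10 cm per cycle at 3 GHz
    print(cm_per_cycle(6))  # ~5 cm at 6 GHz: half the reach
    ```

    At multi-GHz clocks that budget is on the order of the die size itself, which is why pipeline stages exist partly just to move signals around.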
     
  7. DualShock macrumors 6502

    Joined:
    Jun 29, 2008
    #7
    Exactly. The Pentium 4's pipeline was known for having "do-nothing" drive stages added simply to give signals time to propagate across the chip.
     
  8. nanofrog macrumors G4

    Joined:
    May 6, 2008
    #8
    But remember, the voltages are also lowered (for thermal reasons). Stability comes at higher voltages, but those also generate more heat than the die can withstand on a consistent basis.

    There are other issues related to lower voltages in terms of stability as well (i.e. the differential between "0" and "1" is getting low enough that false bits can occur within existing voltage regulator specifications @ 5%).

    Heat is a major issue, and it's why the tech needs to shift from electrical to optical signaling. That solves the thermal issues, keeps power consumption low, and is capable of insane speeds compared to what we have now.
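    The shrinking-margin point, roughly in numbers: a regulator tolerance that is a fixed percentage of the supply eats the same *relative* share at any voltage, but the *absolute* swing left between "0" and "1" shrinks as core voltages drop, so a fixed amount of noise is more likely to flip a bit. (Voltages below are illustrative, not real rail specs.)

    ```python
    # Worst-case usable logic swing after +/- tolerance regulator ripple
    # on both rails. Fixed-percentage ripple, shrinking absolute margin.
    def usable_swing(vdd, tolerance=0.05):
        """Supply voltage minus ripple allowance at both extremes."""
        return vdd - 2 * (vdd * tolerance)

    print(usable_swing(3.3))  # ~2.97 V of swing on an old 3.3 V part
    print(usable_swing(1.0))  # ~0.90 V on a modern low-voltage core
    ```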

    Research has been on-going for years (since the '80s that I know of), and it's getting closer to reality for mass production. It's been slow, but it's getting there.
     
  9. JesterJJZ thread starter macrumors 68020

    JesterJJZ

    Joined:
    Jul 21, 2004
    #9
    I always thought that something like an optical processor would be next. It would surely lower the heat if all the connections ran through light beams, no?
     
  10. nanofrog macrumors G4

    Joined:
    May 6, 2008
    #10
    Absolutely. :D
     
  11. jakesteramma macrumors member

    Joined:
    Oct 10, 2007
    #11
    I'm no pro, but I've always thought it was related to power consumption and heat. The faster you push a processor, the more heat it produces, which at some point will destroy it. Speeding a processor up also requires more power.

    These days, with more and more mobile devices that are smaller and smaller, heat and power consumption are big issues. So it seems to me that the processor manufacturers have concentrated on better designs that produce less heat and use less power.

    If they just simply pushed the speed faster, I think we'd end up with bigger laptops because they'd need bigger fans and larger batteries.

    That's my 2 cents. Correct me if I'm wrong.
    :)
     
  12. chaosbunny macrumors 68000

    chaosbunny

    Joined:
    Mar 11, 2005
    Location:
    down to earth, far away from any clouds
    #12
    Well, the real reason for me is that it wouldn't make sense for Intel or AMD to release faster-than-necessary processors, be it in core counts or GHz. Of course they don't want to sell you a processor that will last 10 years; they want to sell you a new one every 2 years and have you throw the old one away. That's the deal with today's near-monopoly and the necessity of steady growth for shareholders, not the necessity of delivering good products to customers.
     
  13. macuserx86 macrumors 6502a

    macuserx86

    Joined:
    Jun 12, 2006
    #13
    I don't mean to rain on your parade here, but the i7-980X (the new hexacore) and the i7-975 have both been overclocked from their stock 3.33GHz to 4GHz on air. Higher is possible with water cooling; the highest that seems to be stable is around 4.2GHz.

    There's really no barrier here lol
     
  14. 300D macrumors 65816

    300D

    Joined:
    May 2, 2009
    Location:
    Tulsa
    #14
    Just 4.2GHz, and with water cooling (remember that nightmare?)
     
  15. nanofrog macrumors G4

    Joined:
    May 6, 2008
    #15
    What you have to remember is that the parts produced have to hold to a minimum spec (consistent parts).

    With OC'ed parts, they can vary wildly. That is, let's say you get exactly the same frequency out of multiple parts; the voltages, however, will vary from part to part. Conversely, if you fix the voltage, the frequency will vary (to say nothing of the damage that may occur). It's just not predictable, and they need to sell parts that will meet whatever spec is published for them.

    So that means the rated clock speeds are lower, and the parts also run lower voltages, which reduces the danger of a CPU "self-destructing" (electromigration).
     
  16. macuserx86 macrumors 6502a

    macuserx86

    Joined:
    Jun 12, 2006
    #16
    So far, 4GHz has proven stable and easily reproducible (no cherry-picked parts) on just air. Often a voltage change isn't even required, though it does improve stability.
    Yes, this puts much more wear on the CPU, but it is easily possible.

    That said, the GHz war has been over since NetBurst-architecture chips went the way of the dinosaur.
    Now the core wars have begun...
     
  17. nanofrog macrumors G4

    Joined:
    May 6, 2008
    #17
    I've seen a fair bit of variance in i7 SP boards though (same model and firmware, so that's not the culprit).

    But as you mention, core counts have taken center stage (the enterprise market wants more cores for servers/clusters).
     
  18. Full of Win macrumors 68030

    Full of Win

    Joined:
    Nov 22, 2007
    Location:
    Ask Apple
    #18
    You do know this 'wall' was crossed by the POWER6 design years ago (in production CPUs).

    http://en.wikipedia.org/wiki/POWER6
     
  19. Eidorian macrumors Penryn

    Eidorian

    Joined:
    Mar 23, 2005
    Location:
    Indianapolis
    #19
    The Core 2 (Wolfdale) based Celeron E3900 is going to hit a blazing 3.4 GHz. :cool:
     
  20. JesterJJZ thread starter macrumors 68020

    JesterJJZ

    Joined:
    Jul 21, 2004
    #20
    Yeah I know. I wasn't literally saying 3.00GHz, just around there. I'm also not talking about overclocked and hacked systems. I bet you can clock chips way higher with nitrogen cooling and stuff.

    What I meant was that officially released stock processors seemed to have been stuck between 2.5-3.4GHz for the past 7 years. Just an observation I wanted more info on.
     
  21. macuserx86 macrumors 6502a

    macuserx86

    Joined:
    Jun 12, 2006
    #21
    It's pretty simple really: the trend with NetBurst was to use long pipelines and high clock speeds to get work done, while AMD focused more on multiple cores. Intel realized that NetBurst could only go so far and clock speeds could only go so high before the chip would melt. They tried the Pentium D, which was essentially two P4s glued together in one package. It was a massive failure (if you disagree then god help you).
    The clock speeds have remained similar, but the architectures have changed radically. Efficiency is the goal, not raw speed.
     
  22. nanofrog macrumors G4

    Joined:
    May 6, 2008
    #22
    Recent systems using LN2 have exceeded 5GHz.
     
  23. Cynicalone macrumors 68040

    Cynicalone

    Joined:
    Jul 9, 2008
    Location:
    Okie land
    #23
    This might be dumb but…

    Aren't we still limited by the mechanical hardware inside our computers? HDDs, for example, seem to be a real bottleneck.

    Maybe I'm just not understanding this :eek: but wouldn't we need to let the rest of the components catch up before we increase the processor speeds? Otherwise we might not see any real world gains.
     
  24. alent1234 macrumors 603

    Joined:
    Jun 19, 2009
    #24
    Back in the P4 3+ GHz days, AMD CPUs running at far lower clock speeds would outperform the P4. Intel dumped the P4 for the Pentium M core behind the Centrino platform, a modified P3 design worked on by their Israel-based R&D team. The C2D and the current Core i CPUs are all descended from that design. They are much more efficient per clock cycle than the P4, whose deep pipeline meant a branch misprediction forced instructions to be rerun.
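    The deep-pipeline penalty can be ballparked: a mispredicted branch flushes the pipeline, wasting roughly pipeline-depth cycles per mispredict. (The stage counts below are the commonly cited figures for Prescott-era NetBurst and the Core microarchitecture; the 1% mispredict rate is an illustrative assumption.)

    ```python
    # Rough average cost of branch misprediction: each mispredict flushes
    # the pipeline, wasting approximately pipeline-depth cycles.
    def mispredict_overhead(depth, mispredict_rate):
        """Average extra cycles per instruction lost to pipeline flushes."""
        return depth * mispredict_rate

    p4   = mispredict_overhead(31, 0.01)  # ~31-stage NetBurst (Prescott)
    core = mispredict_overhead(14, 0.01)  # ~14-stage Core design
    print(p4, core)  # 0.31 vs 0.14 extra cycles per instruction
    ```

    Same clock, same branch predictor accuracy, and the shorter pipeline still loses less than half as much work per mispredict.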
     
  25. gotzero macrumors 68040

    Joined:
    Jan 6, 2007
    Location:
    Mid-Atlantic, US
    #25
    It is more an issue of cost and what the market demands. There have been plenty of processors running just fine at clock speeds well above 3GHz.

    However, most people don't need them for anything, and given the phenomenal cost of more speed, it almost always makes sense to just add more cores right now.
     