So what really is the big deal with Broadwell v Haswell?...

Discussion in 'MacBook Pro' started by Kal-037, May 11, 2015.

  1. Kal-037 macrumors 6502a

    Kal-037

    Joined:
    Apr 7, 2015
    Location:
    Depends on the day, but usually I live all over.
    #1
    So aside from the features we've been told would come from newer chips, what is the main draw? I'm not super familiar with processors these days; I've only been learning the basics of the new i3, i5, and i7 processors. Before that I mainly went with the cheapest processor that would fit the motherboard of whatever desktop PC I was building (the pre-Pentium 4 days). I'm beginning to see what these new "Core" processors offer.
    But my question is... Haswell is 22nm, right? So keeping it cool and overclocking it is easier due to its size? Hasn't each die shrink reduced overclocking headroom?
    (A restriction of physics, right? The transistor density increases as the components are made smaller, so more power is packed into less space, but there's also less area to radiate the heat. So if I have two 2.7GHz chips, won't the physically bigger one be easier to cool, and be capable of better overclock speeds?)

    I guess I am asking: with the shrink from Haswell's 22nm to Broadwell's 14nm, won't OC speeds decrease, and won't keeping the chip cool be more difficult? So what will these new chips really bring to CPU performance? I know the GPU performance will be better, but how do they achieve better CPU and battery performance?
    Again, I'm no genius, and much of this is stuff I've picked up while googling (trying to figure out each processor's features and whatnot). I really am just curious as to how Broadwell will be able to be "better" than Haswell.

    ...Or am I asking the wrong question? Should I ignore Broadwell and ask about Skylake? lol :)


    Thanks for any help, sorry for so many questions and my ignorance.



    Kal.
     
  2. dusk007 macrumors 68040

    dusk007

    Joined:
    Dec 5, 2009
    #2
    Overclocking in the 4GHz+ region that is done on desktops has gotten worse over the last few generations; the chips are less and less efficient at those high clock rates. At lower clocks (<3GHz) the chips need less voltage each generation and use less power. That is what matters for notebooks.
    Also, the GPU is on the same chip and the TDP is unchanged, so if each part uses less power, the GPU or CPU can actually clock higher under load before they reach the combined TDP limit.
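    To put rough numbers on that (purely a sketch; the constants below are made up, not real silicon figures): dynamic switching power scales roughly as P = C * V^2 * f, and past a certain clock the voltage has to rise along with the frequency, so power grows much faster than performance up there.

        # Illustrative only: toy dynamic-power model, P = C * V^2 * f.
        # The capacitance, knee point, and voltage slope are invented numbers.

        def required_voltage(freq_ghz, v_min=0.8, knee_ghz=2.5, slope=0.15):
            # Assume voltage is flat up to a "knee", then rises with clock.
            return v_min + max(0.0, freq_ghz - knee_ghz) * slope

        def dynamic_power(freq_ghz, capacitance=3.0):
            v = required_voltage(freq_ghz)
            return capacitance * v ** 2 * freq_ghz

        for f in (1.5, 2.5, 3.5, 4.5):
            print("%.1f GHz -> ~%.1f W" % (f, dynamic_power(f)))
        # 1.5 GHz -> ~2.9 W, 2.5 -> ~4.8 W, 3.5 -> ~9.5 W, 4.5 -> ~16.3 W:
        # tripling the clock costs almost 6x the power in this toy model.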

    At very low power (<5W) the efficiency and architecture improvements did a lot, which is why the Broadwell-powered notebooks show quite significant battery life improvements. Basically, under low load the 15W chips can manage the same long battery life that the 4.5W Core M chips do.
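    A back-of-the-envelope example (the battery size and draw figures are assumptions, not measurements of any real machine) of why the low-load draw, not the 15W TDP, is what sets battery life:

        # All numbers invented for illustration.
        battery_wh = 50.0         # hypothetical battery capacity
        rest_of_system_w = 4.0    # hypothetical display + SSD + WiFi draw

        for label, cpu_low_load_w in (("older 15W chip", 2.5),
                                      ("newer 15W chip", 1.0)):
            hours = battery_wh / (rest_of_system_w + cpu_low_load_w)
            print("%s: ~%.1f h light use" % (label, hours))
        # ~7.7 h vs ~10.0 h from the same battery and the same TDP class.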

    Actual performance increases are minimal outside of higher clock rates. But overclocking issues only apply past a certain clock rate, because each architecture and manufacturing process is aimed at a certain clock range, and the resulting chip will be inefficient outside of that range.

    E.g. a lot of smartphone SoCs these days pair four slow 1.3GHz A53 cores with four 2GHz A53 cores, eight cores in total. Why not just use four cores and clock them higher and lower? Because each cluster can be tailored to be most efficient at a certain clock rate.
    Intel disregards the high-end overclocking crowd who push past 4.5GHz and simply doesn't build chips that support higher and higher clocks. They largely keep the clocks the same but try to get more performance from each cycle and improve power draw at low clock rates.
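    A toy model of that tailoring (the power curves are invented, not real A53 data): each cluster is efficient only near its design clock, so for any needed speed you run whichever cluster gets there for the least power.

        def little_cluster_w(ghz):
            # Hypothetical cluster tuned for <=1.3 GHz: cheap at low clocks,
            # but power ramps steeply if pushed past its design range.
            return 0.3 * ghz + (4.0 * (ghz - 1.3) ** 2 if ghz > 1.3 else 0.0)

        def big_cluster_w(ghz):
            # Hypothetical cluster tuned for ~2 GHz: higher baseline cost,
            # but scales gracefully at the top end.
            return 0.6 + 0.5 * ghz

        for need in (0.5, 1.0, 1.3, 1.8, 2.0):
            lw, bw = little_cluster_w(need), big_cluster_w(need)
            print("need %.1f GHz -> use %s cluster" %
                  (need, "little" if lw < bw else "big"))
        # The little cluster wins up to ~1.3 GHz, the big one beyond that.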
     
  3. Kal-037, May 11, 2015
    Last edited: May 11, 2015

    Kal-037 thread starter macrumors 6502a

    Kal-037

    Joined:
    Apr 7, 2015
    Location:
    Depends on the day, but usually I live all over.
    #3
    Thanks so much for the information.
    I think I'm just trying to talk myself out of selling my current MBP for a Skylake or Broadwell one; it seems like I'd have to sacrifice actual performance for better battery life and a poor GPU, when what I really need is CPU power and an actual dGPU. I am majoring in animation and film... and iGPUs don't seem like they'll be up to the task anytime soon. Even the 2012 GT 650M looks like it will still be about as good as whatever Skylake might throw out.


    So the CPU won't get much worse or better with Skylake or Broadwell. I guess I'm just hoping Apple gives their retina MacBooks a newer-series graphics card. I'll just have to pay the higher price for a higher-end CPU. (Ugghhh) lol


    Kal.
     
  4. Samuelsan2001 macrumors 603

    Joined:
    Oct 24, 2013
    #4
    Broadwell and Skylake will probably bring a new dGPU option in the 15-inchers, as Intel still isn't there with the iGPU. Unfortunately Broadwell hasn't been released yet (at least not the H series suitable for the 15 inch). It may also get skipped, with Intel moving straight to Skylake.

    So there are a few things that might happen. If Broadwell is released, they will most likely use it along with a new dGPU sometime this summer. If not, I don't see an update until early next year, with Skylake and whatever dGPU is good at the time.
     
  5. dusk007 macrumors 68040

    dusk007

    Joined:
    Dec 5, 2009
    #5
    Broadwell will be about 20-25% faster in theory in the GPU. Memory bandwidth is an issue, though, and the 6100 was basically no faster than the 5100; one can only guess how it will actually turn out with the 6200. Skylake with a big Iris Pro GPU won't be around for a good long while. It is supposed to have 72 EUs and be 50% faster than Broadwell. That will definitely put it ahead of a 650M or 750M, but relative to Maxwell or Pascal (which will probably be out alongside that Skylake Iris Pro chip) it is still quite poor. Considering they run a supposedly better 14nm process than anybody else has access to, the fact that they cannot even compete with Maxwell on 28nm suggests their architecture needs work. And that they claim only about as much of a speed increase as they raise the EU count suggests the architecture will not see significant changes.
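    Rough arithmetic behind that last point (the EU counts are the commonly cited figures; the clock and per-EU throughput are assumptions for illustration): 72 EUs over Broadwell's 48 is exactly 1.5x, so a "50% faster" claim implies roughly zero gain per EU.

        # Peak-throughput sketch; 16 FLOPs/cycle/EU assumes 2x SIMD-4 FMA units.
        def peak_gflops(eus, ghz, flops_per_cycle_per_eu=16):
            return eus * flops_per_cycle_per_eu * ghz

        broadwell_eus, skylake_eus = 48, 72
        ghz = 1.1  # hypothetical sustained GPU clock for both

        print("EU ratio: %.2fx" % (skylake_eus / float(broadwell_eus)))   # 1.50x
        print("~%d vs ~%d GFLOPS" % (peak_gflops(broadwell_eus, ghz),
                                     peak_gflops(skylake_eus, ghz)))      # 844 vs 1267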
    They dedicate a lot of die space to the GPU, but they still don't seem to put much effort into it. The worst part is how their 16-EU GPU on the 14nm Atom compares to ARM's Mali & Co. They really should just license Mali designs and improve on those. They have more money than all their competition combined; they can do anything, but they seem content with their current GPU architecture.
    Maybe once HBM comes in and they start using it there will be a significant change, but until then Nvidia's dGPUs will remain out of reach in performance.
     
