How much faster will Penryn Mac Pros really be?

Discussion in 'Mac Pro' started by netdog, Dec 10, 2007.

  1. netdog macrumors 603

    netdog

    Joined:
    Feb 6, 2006
    Location:
    London
    #1
    Considering that I do a fair amount of image editing (Aperture, CS3) and some HDV video (though this comes in bursts and I am not manipulating video most of the time), how much faster would a new 2.83 8-core (which I suspect will be a 1333 FSB) be than a 2.66 4-core for my purposes?

    I understand that the video rendering may be more than twice as fast, but will anything else really be much faster? Can Leopard or CS3 or Aperture really take advantage of those extra cores? Will I really be buying a machine that will last that much longer if I hold out for a new one rather than buying a 2.66 at a reduced price when the new Penryns are released? Honestly, more than anything else, I need dual-link DVI.
     
  2. netdog thread starter macrumors 603

    netdog

    Joined:
    Feb 6, 2006
    Location:
    London
    #2
    I do know that the current 3.0GHz octo-cores don't have any advantage over the current 3.0GHz quads for many tasks. Is Leopard 10.5.2 likely to change that much? Revisions to CS3? Is the Penryn machine going to be that much faster? Would a 2.66 at a bargain price be a good move now?
     
  3. Umbongo macrumors 601

    Umbongo

    Joined:
    Sep 14, 2006
    Location:
    England
    #3
    A 2.83GHz Penryn is going to be 10%+ faster for many content creation tasks, due to the clock increase and the Penryn improvements. The better memory controller and other features may also benefit you. If your usage has you waiting on processing for many hours a week, you may see a real advantage over the life of the machine by getting the new Mac Pro.

    High-end programs are likely to become more and more multi-core aware; that's the way processors seem to be going, and it's not beyond the realm of possibility that those changes will happen sooner rather than later. Also, having 8 cores may offer you new usage options that hadn't occurred to you before.

    That said, many people are going to be using quad 2.66GHz Mac Pros for years to come and will be perfectly fine with them. If you can get a good saving, don't consider it a mistake to buy what is already available; they aren't suddenly going to become useless overnight.
     
  4. Matek macrumors 6502a

    Joined:
    Jun 6, 2007
    #4
    A new revision of OS X cannot make much difference; it's the apps that need to be modified for multi-core performance, so if anything, a revision to CS3 could improve things. Also, many applications can only be threaded to a certain point, and once you reach that point, even infinitely many cores won't help you. All in all, 8 and 16 cores are still overkill and more of a marketing trick (double the cores doesn't equal double the performance most of the time), as anyone who knows a little about computer architecture will tell you.

    Penryn machines aren't going to be that much faster just because they are Penryn. The biggest advantages are SSE4 (which, again, needs application support) and a smaller fabrication process that allows lower power consumption, less heat, and higher frequencies. It's not an entirely new architecture, just an improvement on the current chips. That 2.66 you're mentioning might be a good choice if you can get it cheap.
     
  5. TyleRomeo macrumors 6502a

    Joined:
    Mar 22, 2002
    Location:
    New York
    #5
    But if you use Compressor or After Effects, then you do get double the performance. It's a workstation, designed for the most rigorous rendering around, and those apps will be happy to take advantage of all 8 cores. Read some reviews at barefeats.com.
     
  6. Octobot macrumors regular

    Octobot

    Joined:
    Oct 30, 2006
    Location:
    MPx12
    #6
    For those who render 3D and video... sufficiently so.
     
  7. deathshrub macrumors 6502

    Joined:
    Oct 30, 2007
    Location:
    Christmas Island
    #7
    I'm not really sure what you want to know. The FSB will be higher. The memory will be faster. Yes, the processors will be faster. What are you trying to find out from us?

    Apps that may not be multi-core aware right now will in all probability be multi-core aware in the future. It seems we aren't going back to processors with one core, so I do not see how software developers will get away with NOT enabling their software to take advantage of the large number of cores available in today's machines, ESPECIALLY CPU-intensive apps like CS3, etc.

    Dual-link DVI is included on every graphics card sold with the Mac Pro currently. If you really need high graphics performance, get the X1900. You probably don't need the added cost and marginal gains of the Quadro unless you are doing CAD work.
     
  8. gnasher729 macrumors P6

    gnasher729

    Joined:
    Nov 25, 2005
    #8
    When you say "apps can only be threaded to a certain point": sometimes they could be threaded further, but it wasn't worthwhile with only a few cores.

    Let's say an application has to do two things, A and B. A takes 30 seconds, B takes 100 seconds, both can run in parallel, and B can be split into any number of threads. Your time is:

    One core: 30 + 100 = 130 seconds.

    Two cores: After 30 seconds, task A is finished and 70 seconds of task B are left. That is distributed between two cores for 35 seconds, total = 65 seconds.

    Four cores: After 30 seconds, task A is finished, and the other three cores have done 90 of task B's 100 seconds of work, so 10 seconds are left. Split across all four cores, that takes another 2.5 seconds, total = 32.5 seconds.

    Eight cores: After 30 seconds, task A is finished on the first core. Task B was finished by the other seven cores after 100/7 ≈ 14.3 seconds, so seven cores have been sitting around idle for almost 16 seconds. Total = 30.0 seconds.

    As you see, with "only" four cores, splitting task A into threads was pointless. But with eight cores, doing so would almost double the speed, so now some programmer would start working on it.
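
    Here is a quick back-of-the-envelope sketch of that schedule in Python (the function name and the 30s/100s split are just the example numbers above, nothing more):

    Code:
    def total_time(cores, a=30.0, b=100.0):
        """Finish time when task A stays on one core and task B
        splits perfectly across the remaining cores, then all cores."""
        if cores == 1:
            return a + b  # everything runs back to back
        # While A runs for `a` seconds, the other cores chew through B.
        b_left = b - a * (cores - 1)
        if b_left <= 0:
            return a  # B finished before A did; A is now the bottleneck
        # Once A is done, all cores share the remaining B work.
        return a + b_left / cores

    for n in (1, 2, 4, 8):
        print(n, "cores:", total_time(n), "seconds")
    # 1 cores: 130.0 seconds
    # 2 cores: 65.0 seconds
    # 4 cores: 32.5 seconds
    # 8 cores: 30.0 seconds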
     
  9. gnasher729 macrumors P6

    gnasher729

    Joined:
    Nov 25, 2005
    #9
    Type "Intel Price List" into Google, and you may find that a quad core 2.5GHz Penryn for $316 is planned. That is cheaper than the currently used dual core 2.0GHz Xeon. Two of these are much less than _one_ current 2.66GHz dual core. So Apple could build a relatively cheap eight core 2.5 GHz machine. That is what I would hope for.
     
  10. takao macrumors 68040

    takao

    Joined:
    Dec 25, 2003
    Location:
    Dornbirn (Austria)
    #10
    Yeah, the next few years are gonna be awesome for anybody who's into parallel programming... core counts will go insane within the next few years (Intel has already shown prototypes with more than 64).
    Nehalem Xeon CPUs are said to scale up to 32 (on 2 dies in one package) as early as mid-2009.

    Looks like all that code written for the big parallel shared-memory machines of a few years ago can now be pulled out of the hat again.

    Doesn't Penryn also add SSE4? I would call that quite a significant thing.
     
  11. Tracer macrumors 6502

    Joined:
    Jun 20, 2007
    #11
    SSE4 will be a big deal for Video Encoding.

    The new instructions will help a lot when it comes to encoding DivX and H.264 Video.

    There are some benchmarks on Google that show huge performance gains with H.264.

    Tracer
     
  12. Matek macrumors 6502a

    Joined:
    Jun 6, 2007
    #12
    I just read one of the reviews you mention, and they didn't get anywhere near double the performance in their benchmarks. In Cinebench and Geekbench, the 8-core machines were 45% and 56% faster; in Photoshop CS3 and Aperture, 0% and 7% faster; and in QuickTime, 38% faster.

    All the tested apps were optimized for multi-core work.

    You have a point there, although I was talking about the hard limit of the code itself. In every program, things can only be threaded to a certain extent. On average, that fraction is close to 60%, I believe, but in the apps we're discussing here it gets higher than that, let's say 90% or more. That may seem like a large number, but some people disagree. There is a little something called Amdahl's law, which lets you calculate how much an application can be sped up by adding cores.
    [Graph from the Wikipedia article on Amdahl's law: speedup versus number of cores for different parallelizable fractions]
    This graph (linked from the mentioned wiki article) demonstrates this with four examples: software where half the code can't be threaded, one fifth, one tenth, and none at all. You can see that even with software where 90% of the code can be threaded, adding more and more cores gets increasingly pointless.
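
    The numbers behind those curves are easy to reproduce; here is a tiny Python sketch of Amdahl's law using the same four parallel fractions (50%, 80%, 90%, 100%):

    Code:
    # Amdahl's law: speedup on n cores when a fraction p of the
    # program's running time can be parallelized.
    def speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    for p in (0.5, 0.8, 0.9, 1.0):
        row = ", ".join(f"{n} cores: {speedup(p, n):.1f}x" for n in (2, 8, 64))
        print(f"p = {p:.0%} -> {row}")
    # p = 50% -> 2 cores: 1.3x, 8 cores: 1.8x, 64 cores: 2.0x
    # p = 80% -> 2 cores: 1.7x, 8 cores: 3.3x, 64 cores: 4.7x
    # p = 90% -> 2 cores: 1.8x, 8 cores: 4.7x, 64 cores: 8.8x
    # p = 100% -> 2 cores: 2.0x, 8 cores: 8.0x, 64 cores: 64.0x

    Even a program that is 90% parallel tops out below 10x no matter how many cores you throw at it.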

    See, this is what I'm afraid of. Chip makers are trying to make more money and are using more and more cores as a marketing trick to sell you stuff. Back in the day, the same chip at twice the frequency meant roughly twice the performance, while nowadays twice the cores almost never does, but they still make it look that way.

    Don't get me wrong, multicore is really useful for a small number of special applications and has been used in science for a long time, but the consumer market is much less suited to it, and the more core counts increase, the less we get for our money.
     
  13. takao macrumors 68040

    takao

    Joined:
    Dec 25, 2003
    Location:
    Dornbirn (Austria)
    #13
    Marketing trick? Would you prefer them going back to the stupid idea of pushing clock frequency further and further? I seriously prefer multi-core architectures... after all, I like to either have many small programs open (which means many threads) or have big programs open using many threads.
    Not only is it better in the second case but also in the first, because of the reduced latency: an actual software thread can run on actual hardware without a context switch (which eats an awful lot of CPU cycles on ordinary single-threaded CPUs).
    Also, it's an awful lot better to have smaller cores than big beasts like the G5 with its 5-instruction ILP, which compilers are notoriously difficult to write for.


    You know why? Because many developers have been lazy and took the easy road whenever they could... aka "let the chip makers make up for it with MHz".

    Also, another thing about Amdahl's law:
    the important thing is "time spent in the code", not the code itself... normally 90% of the time is spent in 10% of the code... which is most likely loops... which can then most likely be made parallel (see the sketch below).
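
    As a toy illustration of that point (a minimal sketch, not from any real app; the work() function is made up), the hot loop is typically a data-parallel map over independent items, which splits cleanly across however many cores you have:

    Code:
    # The "10% of the code" hot spot is often a loop whose iterations
    # are independent, so it maps cleanly onto a pool of worker cores.
    from multiprocessing import Pool

    def work(x):
        # stand-in for one expensive loop iteration
        return sum(i * i for i in range(x))

    if __name__ == "__main__":
        items = [100000] * 64
        with Pool() as pool:          # one worker per core by default
            results = pool.map(work, items)
        print(len(results), "items processed")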

    That said, there are plenty of features coming in the next generations of Intel CPUs which can easily increase performance a lot if they are used... like the crossbar interconnect between the cores and, afaik, a shared level 3 cache on die.
    Don't forget the cache either... with more cores you very likely get more on-die level 1 and level 2 cache as well, which can improve performance considerably, especially with lazy code that puts multidimensional arrays in contiguous blocks.
     
  14. Cromulent macrumors 603

    Cromulent

    Joined:
    Oct 2, 2006
    Location:
    The Land of Hope and Glory
    #14
    That is not true at all. The OS is what controls things such as thread priority and management. If the OS is more efficient at this, it will automatically improve the performance of applications that are multithreaded.
     
  15. fernmeister macrumors regular

    Joined:
    Aug 19, 2007
    #15
    I can't comment on what is right for "consumers", but for Digital Audio the chip makers seem to be doing the right thing. For a while now it has been the case that more cores and more RAM are the way forward. That's why expansion via PCI DSP cards and external processing has been so popular - it's another way to add cores!

    Apple would be daft not to try and exploit more cores as a way to get more plugins and instruments out of each Logic session, thereby eating into the DSP market.
     
  16. Matek macrumors 6502a

    Joined:
    Jun 6, 2007
    #16
    The coders need to improve the applications first; until that's done, OS-level optimization won't achieve much.

    The marketing trick I have in mind is that chip makers present their products in a way that makes people think multicore processors will give them double the performance for double the cores, which is not true. I don't see what's so stupid about higher clock frequencies; an increase in frequency translates directly to better performance and is also easier for customers to understand. Chip makers turned to the multicore approach (not for that reason alone, of course) because they ran into physical problems that prevented them from easily reaching higher frequencies.

    Reduced latency is a valid point although it's questionable how much (if at all) it affects the end-user. I agree with you, multitasking is a good use for multiple cores, but that's now, using 2, 4, 8 cores. As someone mentioned, there are plans for 32, 64 and more cores on a single processor, which gets increasingly pointless.

    Heh, lazy coders are a fact, of course. Everyone knows the good old "It doesn't matter how good the hardware is because the software boys will piss it away", but I think that's the cold hard truth we have to accept.

    You're right again about Amdahl's law, although again, the more cores we add, the less point there is to each one. We need to stop at a reasonable limit.

    Can't argue about the cache either, but what you describe is merely a consequence; more cache could easily be added to processors with fewer cores too, it's just not (financially) worth it right now.

    All in all, I probably have gone a bit off track, yes. My comments here were directed mostly at the mainstream market. Some people brought up very specific professional uses where multicore CPUs will certainly come in more handy (although a lot can be done, we need better memory performance to utilize 16 and more cores; the monolithic design takao mentioned wouldn't hurt either), while home users will still profit much more from actual architecture improvements and higher frequencies than from a large number of cores.

    OFFTOPIC: Pleasantly surprised at the number of people with great knowledge on this topic :D.
     
  17. takao macrumors 68040

    takao

    Joined:
    Dec 25, 2003
    Location:
    Dornbirn (Austria)
    #17
    reduced latency is _the_ most important feature on a desktop ... (at least for me ... and OS X is hardly great at that anyway.. especially on my G4 ;) )

    That's why modern operating systems have more than 16 threads running even at idle (ignoring the fact that with Nehalem or Penryn, hyperthreading will sooner or later come back too).

    True, but why not get both? The road of very complex CPUs has proven to be way too troublesome, not only in terms of actual performance gains (compilers, CPU design) but also in terms of heat (hot spots) and power usage... also, newer multi-core CPUs will simply suspend unneeded cores into idle.

    After all, what are you going to do with all that die real estate freed up by new fabrication processes?

    About frequency: the Core 2 architecture is massively overclockable right now and will only improve in that department with newer processes, so higher clock rates could easily be achieved later down the road anyway.
     
