Can someone explain multi-core technology?

Discussion in 'MacBook Pro' started by Dookieman, Jun 11, 2011.

  1. Dookieman macrumors 6502

    Oct 12, 2009
    Hi, I was just wondering how useful multi-core processors actually are. From my understanding, they are a single die which contains two or more processors. How much more efficient is this compared to a single-core processor? Does it simply allow for better multitasking? Also, back in the 90's some computers had dual processors, similar to what we have now, only much bulkier and not on one die. Do you still need to code for each processor and break up tasks like in the early days of multiprocessors, or do current SDKs do this on their own? If so, are companies doing this? If not, couldn't they split up tasks to help improve performance with games, rendering, etc...?

    I remember back in the late 90's and early 2000's the race was for the processor with the highest number of Hz. That was a good baseline for performance. Now it seems the race is for how many cores you can fit on one die, and that's what indicates the best performance.

  2. sioannou macrumors member


    Mar 25, 2010
    Nicosia Cyprus
    Don't know what was happening back then, but yes, basically the programmers have the headache of splitting the tasks across the processors. Even if this is not 100% true, basically, for example, 2.0GHz with 4 processors actually means 4x2 = 8 GHz. Now, the more talent the programmer has, the better the utilization of multi-core technology, and thus the better the performance for the end user.
  3. cube macrumors P6

    May 10, 2004
    Like you said, it's not true that 4x2 = 8 GHz

    It's not about talent: some things have limited parallelism, and with current technology you can't split them up further without massive overhead.
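    That limit is usually stated as Amdahl's law: if only a fraction p of a program can run in parallel, the speedup on n cores is capped at 1/((1-p)+p/n), no matter how good the programmer is. A quick sketch in Python (the 90% figure is just an assumed example, not from the thread):

```python
def amdahl_speedup(p, n):
    """Maximum speedup when a fraction p of the work is parallelizable
    and the parallel part runs on n cores (Amdahl's law)."""
    return 1.0 / ((1.0 - p) + p / n)

# If 90% of the work parallelizes, 4 cores give about 3.08x, not 4x,
# and even infinitely many cores can never beat 10x.
print(amdahl_speedup(0.9, 4))        # ~3.08
print(amdahl_speedup(0.9, 1000000))  # just under 10
```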

    For some languages, there are now tools which make it simpler to distribute easily parallelizable tasks among an arbitrary number of processing units.
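    Python's standard-library concurrent.futures is one example of such a tool: you hand it an easily parallelizable task and it distributes the work across a pool of processes for you. A minimal sketch (the squaring task is just a stand-in for any independent per-item work):

```python
from concurrent.futures import ProcessPoolExecutor

def square(n):
    # An "embarrassingly parallel" task: each input is independent.
    return n * n

if __name__ == "__main__":
    # The executor splits the inputs across worker processes for us;
    # we never manage the individual processors ourselves.
    with ProcessPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(square, range(8)))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```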
  4. MikhailT, Jun 11, 2011
    Last edited: Jun 11, 2011

    MikhailT macrumors 601

    Nov 12, 2007
    It is all about the limits of the technologies at the time in addition to the power management issue.

    A 120nm single-core CPU (1990’s) running at 5GHz would be damn near impossible, because the heat would kill the CPU at that speed, literally cooking it. You’d need liquid nitrogen to run at that speed. Not to mention that CPUs at that time would only do one instruction per clock. The way those CPUs worked is that the instructions come in as one data stream (one core). So the system would be slow if you had to do an anti-virus scan, browse the web, transfer files and listen to music at the same time.

    Over time, increasing the clock frequency was becoming too much pain to work with and it wasn’t going to scale well into the future. To work around this, they decided to split the work into two cores on the same die. This let them lower the clock frequency and manage the heat much better. In addition, you can do twice as much work per clock. You can now do an anti-virus scan and transfer files on one core and browse the web on the second core without feeling the slowdown, because you’re doing all of them on two separate cores.
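    That idea of putting independent jobs on separate cores can be sketched with Python's multiprocessing module. The "scan" and "transfer" functions below are made-up stand-ins for real workloads; the point is that each process is free to be scheduled on its own core:

```python
import multiprocessing as mp

def scan_work():
    # Stand-in for an anti-virus scan: independent CPU-bound work.
    return sum(i * i for i in range(1000))

def transfer_work():
    # Stand-in for a file transfer happening at the same time.
    return sum(range(1000))

def worker(name, fn, q):
    # Run one job and report its result back to the parent.
    q.put((name, fn()))

if __name__ == "__main__":
    q = mp.Queue()
    jobs = [mp.Process(target=worker, args=("scan", scan_work, q)),
            mp.Process(target=worker, args=("transfer", transfer_work, q))]
    for j in jobs:
        j.start()   # the OS is free to put each process on its own core
    for j in jobs:
        j.join()
    print(dict(q.get() for _ in jobs))
```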

    In the past, there were two CPU dies, but they were separated into two sockets on the motherboard. The traffic between them has to be managed at ultra high speed over the motherboard, and this makes communication between the two CPUs extremely slow for parallel tasks.

    Two cores on the same die remove that overhead; the speed between them is extremely fast, faster than any other interconnection on a motherboard. It still doesn’t eliminate the heat management problem, because you still have to lower the clock frequency to manageable levels, but you’re no longer stuck with two separate lower-clocked CPUs talking over a slower interconnection bus.

    In addition to that, CPUs were shrinking, allowing them to put more cores on the same die without increasing the heat, and this let people do multiple things at the same time without actually slowing down the overall performance.

    There’s also the hyperthreading that Intel has on most of their CPUs, presenting the OS with twice as many virtual *cores*.
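    You can see this from software: the OS reports logical processors, which on a hyperthreaded Intel chip is usually twice the physical core count. A tiny sketch (the number printed obviously depends on the machine it runs on):

```python
import os

logical = os.cpu_count()  # logical processors the OS can schedule on
print(f"OS sees {logical} logical processors")
# On a hyperthreaded CPU this is typically 2x the physical cores, but
# each pair shares one core's execution units, so it is not 2x the speed.
```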

    It’s not only about one app doing multiple pieces of work at the same time, but more about the ability to run multiple apps with multiple tasks at the same time.

    For a video game, they can use one core to manage the network code, one core to do the math (nowadays, they use the GPU to do physics), and so on.
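    A rough sketch of that split, using Python threads purely to show the structure (the worker names are invented for illustration, and because of Python's GIL these threads wouldn't actually run CPU work in parallel the way a game's native threads do):

```python
import threading
import queue

results = queue.Queue()

def network_worker(q):
    # Stand-in for the networking code a game might keep on one core.
    q.put(("network", "packets handled"))

def physics_worker(q):
    # Stand-in for the math/physics work (often done on the GPU nowadays).
    q.put(("physics", "step simulated"))

threads = [threading.Thread(target=network_worker, args=(results,)),
           threading.Thread(target=physics_worker, args=(results,))]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(dict(results.get() for _ in threads))
```

    Games tend to use threads within one process (rather than separate processes) because all the subsystems need to see the same game state, and threads share memory.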

    Not every app has distinct tasks like a game does, and that makes it even harder to parallelize the code.
  5. Dookieman thread starter macrumors 6502

    Oct 12, 2009

    Thanks for the information!
