is the core solo chip really a core duo, and is there any way to 'reactivate' it?

Discussion in 'Mac Pro' started by Raukodur, Mar 9, 2006.

  1. Raukodur macrumors newbie

    Joined:
    Feb 28, 2006
    #1
  2. After G macrumors 68000

    After G

    Joined:
    Aug 27, 2003
    Location:
    California
    #2
    Might be a faulty second core. It saves Intel money to rebrand Core Duos that don't have the second core working as Core Solos.

    I don't think you'd want to activate a core if it's faulty.
     
  3. mdavey macrumors 6502a

    mdavey

    Joined:
    Nov 1, 2005
    #3
    Yes, this is correct. It is both faulty and disabled.

    No. It is disabled by passing a current through the faulty core in a specific way which causes the internal fuse links to melt, permanently disconnecting it from the chip's power supply circuitry.
     
  4. Raukodur thread starter macrumors newbie

    Joined:
    Feb 28, 2006
    #4
    is this something you are absolutely sure about?

    How would you find something like that out?
     
  5. flir67 macrumors 6502

    Joined:
    Jun 23, 2005
    #5
    no lead pencil trick for intel :)

    just thinking back to the days of the amd duron and the lead pencil trick.

    good old days..
     
  6. Morn macrumors 6502

    Joined:
    Oct 26, 2005
    #6
    I wonder how many of them actually have faulty cores... or whether they need to kill working cores to make up the numbers necessary :p
     
  7. generik macrumors 601

    generik

    Joined:
    Aug 5, 2005
    Location:
    Minitrue
    #7
    Well you are in no position to ask, you just have to take it at face value :)

    Nothing irritates me more than someone asking a question, then firing a barrage of "are you sure"s when offered a reply.
     
  8. fisha macrumors regular

    Joined:
    Mar 10, 2006
    #8
    (long time reader - first time poster)

    Generik,

    I think it's a perfectly reasonable thing to ask in this case. Many of the other threads on this topic I've read so far allude to no disabling approach similar to mdavey's authoritatively put-forward explanation.

    I won't argue that fusing certain links is a possible method of disablement. What I will ask in reply (as mdavey seems to be so knowledgeable in this):


    If the original chip is a faulty/disabled Core Duo off the same production line, then surely it's a 1.66GHz chip. What gets changed to make it a 1.5GHz?

    Going by the posts about upgrading to higher-speed Core Duos in the mini, there seems to be nothing changed on the motherboard jumpers to alter the CPU's clock multiplier.

    If so, then that would imply (to me) that the clock multiplier is set at the production stage inside the chip die. How would they change it from 1.66 to 1.5? :confused: Because surely fusing stuff to the point of damaging silicon on the faulty half of the chip die is likely to damage the working side too???
     
  9. Raukodur thread starter macrumors newbie

    Joined:
    Feb 28, 2006
    #9
    I wouldn't have asked the second question if he hadn't answered the first question so authoritatively; if he had said "I believe this is the case", then I would have left it at that. When he seems so sure, it makes me wonder where he got that knowledge from.

    And the reason why I asked the question in the first place was not to get a simple yes/no answer, but to find out what's going on in a bit more detail.

    I am surprised that more people aren't interested in this. If there is some way to reactivate the second core (if it is there and isn't faulty), then surely that'd be amazing: both minis would have a dual-core processor. I don't see why anyone would buy the more expensive model then, which leads me to believe there cannot be a way to reactivate the second core, if it is there, since once someone found out how to reactivate it, Apple's sales for the higher-end model would go down.

    Hmm, maybe it's a clever marketing ploy, since the increased sales of the then-cheaper dual-core machine would outweigh the loss from decreased sales of the more expensive dual-core mini.
     
  10. Krevnik macrumors 68030

    Krevnik

    Joined:
    Sep 8, 2003
    #10
    Even fixed multipliers tend to be set by one of the resistors on the chip package. The chip is usually tested during the packaging step to see what speeds are stable. The 1.5 GHz procs could simply be 1.66 GHz parts that didn't actually reach 1.66 GHz.

    Correct, it is likely a resistor or other component on the package itself.

    If the company set the clock multiplier in silicon, they would have all sorts of problems and even lower yields. Fusing stuff on the die is possible, especially if the chip is designed for it. Modular chip designs could very well have inputs designed to allow disabling cores, which simply aren't exposed by the chip package.
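    As a purely hypothetical illustration of a package-set multiplier (the strap encodings and FSB value below are made up for the sketch, not real Yonah parameters), the idea amounts to a lookup read off the package at reset:

```python
# Hypothetical illustration only: these strap encodings and the FSB value
# are invented for the sketch, not real Yonah parameters.
STRAP_TO_MULTIPLIER = {
    0b00: 9,    # 9 x ~166 MHz FSB -> ~1.5 GHz
    0b01: 10,   # 10 x ~166 MHz FSB -> ~1.66 GHz
}
FSB_MHZ = 166.66

def core_clock_mhz(straps):
    """Clock derived from a strap pattern read off the package at reset."""
    return STRAP_TO_MULTIPLIER[straps] * FSB_MHZ

print(round(core_clock_mhz(0b00)))  # 1500
print(round(core_clock_mhz(0b01)))  # 1667
```

    If the speed really is set this way, a different resistor population on the package is all that separates a 1.5 GHz part from a 1.66 GHz one, with no change to the die itself.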
     
  11. Agent69 macrumors member

    Agent69

    Joined:
    Sep 22, 2005
    Location:
    United States
    #11
    After G is probably right, considering that this has happened before with Intel, with the old 486DX/486SX. (You can read more about this at Wikipedia.)
     
  12. mdavey macrumors 6502a

    mdavey

    Joined:
    Nov 1, 2005
    #12
    It is not something I am absolutely sure about for the Intel Core specifically. I don't work for Intel (I have worked for two companies that manufactured semiconductors and a hardware company very closely involved with a semiconductor manufacturer) and doubt that Intel have made a public statement about exactly how they disable the second core. Also, I doubt that any Intel employee that did know for certain would be permitted to comment.

    I do know that it is very common industry practice to disable capacity in all kinds of integrated circuits in this manner. Devices are manufactured on silicon wafers, repeating the device across the wafer's surface many times.

    Although the silicon is extremely pure, it may still contain atomic imperfections. Additionally, each stage of processing and manufacture can introduce new imperfections and contamination. It is common for the wafer (or the devices on the wafer) to be tested between processing steps. Because each processing step is very expensive, it sometimes makes financial sense to scrap an individual wafer rather than finishing the manufacture.

    Once processing is complete, each device is electronically tested and characterised. Due to the nature of the manufacturing process, it often isn't as simple as a pass/fail.

    For instance, poorly manufactured capacitors inside the device can 'leak' current, causing higher-than-expected current consumption of the device (it could be a single capacitor that leaks badly, or thousands of capacitors that each leak a little more than they should). Conversely, if the manufacturer is lucky (or gets very good at controlling yield), they could actually end up with a device that has a lower power consumption than they specified (the manufacturer will design in an allowance for some sub-optimal performance). It isn't just the capacitors inside the IC that might be mis-manufactured - other components can be affected, such as a resistor whose resistance is higher or lower than intended.

    If the distribution and nature of the failed components happens in a certain way, major functionality might be affected. Memory is the highest-density structure and fairly complex to lay out in silicon. As well as its use in discrete RAM chips such as you find on an SDRAM memory stick, memory structures appear in flash devices, processors and many other ICs.

    Because memory structures are at particular risk from processing flaws (due to their high density), they are often duplicated in expensive devices such as CPUs. This lets a manufacturer choose which bank to disable after manufacture (if both banks are okay, the manufacturer will choose the one with the higher power consumption, or choose randomly). Some manufacturers realised early on that by laying out the memory slightly differently they can actually allow the CPU to access and use both banks if they both work, and only one bank if one is faulty. This is the main reason why many CPUs are now available with different cache sizes (where one size happens to be double the other).

    Out-of-tolerance components in the integrated circuit don't just affect the power consumption of the device, they also affect the heat output (heat generated is a function of the power consumption), timing and switching speed of components like transistors. The last two together are the primary reason why CPUs might not work reliably at higher clock speeds, but are able to when the clock speed is reduced.

    If you have read this far, you probably have a new insight into Intel's CPU business model. Once you are designing in an extra copy of the memory for a CPU in case you have to disable half of it, it is only a small step to do the same for a whole second core. Intel's Yonah product range is actually just a single device. Once they have characterised each device on the wafer, they decide what speed, power consumption, memory cache and number of cores each particular device is to have, and permanently disconnect the unused parts of the CPU.
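    The characterise-then-fuse flow described above can be sketched roughly as follows; every threshold, bin boundary and SKU name here is a hypothetical illustration, not Intel's actual test flow:

```python
# Hedged sketch of post-test binning: one characterised die is mapped to a
# SKU, and everything that failed is fused off. All numbers are invented.
def bin_device(max_stable_mhz, cores_ok, cache_banks_ok):
    """Map one characterised die to a SKU (or None to scrap it)."""
    cores = sum(cores_ok)                   # e.g. [True, False] -> 1 working core
    cache_mb = cache_banks_ok.count(True)   # assume 1 MB per good bank
    if cores == 0 or cache_mb == 0:
        return None                         # nothing salvageable: scrap the die
    speed = 1660 if max_stable_mhz >= 1660 else 1500
    family = "Duo" if cores == 2 else "Solo"
    return f"Core {family} {speed / 1000:.2f} GHz, {cache_mb} MB L2"

print(bin_device(1710, [True, True], [True, True]))   # Core Duo 1.66 GHz, 2 MB L2
print(bin_device(1580, [True, False], [True, True]))  # Core Solo 1.50 GHz, 2 MB L2
```

    The point of the sketch is only that the decision happens after manufacture: one mask set, many SKUs.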

    If you want to learn more, Wikipedia is a good place to start.
     
  13. gnasher729 macrumors P6

    gnasher729

    Joined:
    Nov 25, 2005
    #13
    Modifying a chip after production is a very common practice.

    The most common situation is with L2 caches. The Core Duo, for example, has 2 Megabyte of L2 cache. That is an awful lot of transistors, and producing all of them correctly is quite difficult.

    So what Intel (and every other manufacturer) does is build a chip with slightly more than two megabytes of L2 cache, let's say 2080 kilobytes instead of 2048 kilobytes. Then the chip is tested; any parts of the L2 cache that don't work are turned off, so that you end up with two MB of working L2 cache, even if bits of the L2 cache are broken.
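    The spare-capacity idea can be sketched with a toy model using the numbers from the post (the per-row defect rate is an arbitrary assumption):

```python
import random

# Toy model of cache redundancy: build 2080 KB of cache rows, ship the
# part only if at least 2048 KB of them test good. Defect rate is made up.
BUILT_KB, NEEDED_KB = 2080, 2048

def shippable_cache_kb(row_defect_rate, rng):
    """Return the shippable cache size, or None if the part is rejected."""
    good = sum(1 for _ in range(BUILT_KB) if rng.random() > row_defect_rate)
    return NEEDED_KB if good >= NEEDED_KB else None

print(shippable_cache_kb(0.001, random.Random(1)))  # 32 spare rows absorb the defects
print(shippable_cache_kb(0.5, random.Random(1)))    # None: too many dead rows
```

    With a tiny per-row defect rate, the 32 KB of spare rows absorb almost every flaw, so nearly every die ships with a "perfect" 2 MB cache.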

    Another example is the Cell processor, which is manufactured with 8 processing elements, but ships with only 7 of them active. If you assume that, let's say, 10 percent of all processing elements are broken when manufactured, then fewer than half of all chips would have all 8 elements working, but more than 80 percent would have at least 7 working.
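    The yield argument is just binomial arithmetic; a quick sketch:

```python
from math import comb

def p_at_least(n, k, p_good):
    """P(at least k of n independent elements work), binomial model."""
    return sum(comb(n, i) * p_good**i * (1 - p_good)**(n - i)
               for i in range(k, n + 1))

# 10% per-element defect rate, 8 processing elements:
print(round(p_at_least(8, 8, 0.9), 3))  # 0.43  -> all 8 working
print(round(p_at_least(8, 7, 0.9), 3))  # 0.813 -> at least 7 working
```

    So shipping the chip as "7 of 8" nearly doubles the usable yield under this (assumed) defect rate.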
     
  14. excalibur313 macrumors 6502a

    excalibur313

    Joined:
    Jun 7, 2003
    Location:
    Cambridge, MA
    #14
    How is a Core Solo different from a Pentium M? I thought all they did with Core Duo was shrink down two Pentium Ms and make them work well together...
     
  15. chaos86 macrumors 65816

    chaos86

    Joined:
    Sep 11, 2003
    Location:
    127.0.0.1
    #15
    that's a cool idea, huh?

    for years, low ratios of working chips to non-working ones in a batch have been a very big (and expensive) issue. now, unless both processors are dead, there's no loss. i always wondered what the big deal was about dual cores; now i see how cool the tech is.
     
  16. Anonymous Freak macrumors 601

    Anonymous Freak

    Joined:
    Dec 12, 2002
    Location:
    Cascadia
    #16
    Core Solo/Duo added SSE3 instructions, Virtualization, and... One other thing I'm forgetting, sorry.

    In addition, Core Duo was designed from the ground up to be a dual-core processor. It isn't just two cores slapped together on one die, which is essentially what the desktop Pentium D is. In Core Duo, the two 'cores' share some logic circuitry that is duplicated in the Pentium D, and the two cores share a single 2 MB L2 cache. Pentium-M had a 1 MB L2 cache, so while it might appear that Core Duo is just two of those tacked together, it's not.

    In the desktop Pentium D, which is essentially two Pentium 4 processors slapped in one package, the separate cores have separate L2 caches, which means that one processor cannot know what is in the other processor's L2 cache. If core 2 wants a bit of info that happens to be in core 1's cache, core 2 has to go all the way to main memory to fetch it. This means that the L2 caches of both processors could have lots of duplicate information in them, depending on what you're doing. Also, on Pentium D, the two cores talk to each other using the main 800 MHz front side bus, just like two Intel processors in two separate sockets would.

    In Core Duo, though, the L2 cache is shared, which means that if core 2 wants info that core 1 put in cache, it can get it. Also, the two cores talk to each other 'off the bus' through their own internal bus, which doesn't clog up the front side bus with inter-processor communication.
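    A toy model of the difference (the latency numbers are illustrative placeholders, not real Pentium D or Core Duo figures):

```python
# Toy model of private vs shared L2. Latencies are invented placeholders.
L2_HIT = 15       # cycles to hit an L2 cache
MEM_FETCH = 200   # cycles to round-trip to main memory

def fetch_cost(addr, own_cache, other_cache, shared):
    """Cycles for a core to fetch `addr` under each cache design."""
    if addr in own_cache:
        return L2_HIT
    if shared and addr in other_cache:
        return L2_HIT           # shared L2: the other core's lines are visible
    return MEM_FETCH            # private L2s: must go all the way to memory

core1_lines = {0x1000, 0x2000}
# Core 2 wants a line that only core 1 has cached:
print(fetch_cost(0x1000, set(), core1_lines, shared=False))  # 200
print(fetch_cost(0x1000, set(), core1_lines, shared=True))   # 15
```

    The exact cycle counts don't matter; the point is the order-of-magnitude gap between hitting a shared cache and falling through to main memory.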

    Which gets us to a Core Solo. A Core Solo is just a Core Duo with one core disabled. Whether its because one core was dead, or because the whole package couldn't run at the 1.66GHz minimum advertised speed for Core Duo, I don't know. Intel has even been known to 'downgrade' a processor simply for supply and demand reasons. (i.e. Some Celerons became so popular that they took what should have been a perfectly good Pentium and 'downgraded' it to Celeron status.)

    It's also odd, because Apple is selling a 1.5 GHz Core Solo, and Intel still doesn't list a 1.5 GHz Solo, only a 1.66 GHz Solo. (They list a low-power 1.5 GHz Core Duo, though.)
     
  17. excalibur313 macrumors 6502a

    excalibur313

    Joined:
    Jun 7, 2003
    Location:
    Cambridge, MA
    #17
    That is a very good point. Could that be a supply and demand issue too?
     
  18. Anonymous Freak macrumors 601

    Anonymous Freak

    Joined:
    Dec 12, 2002
    Location:
    Cascadia
    #18
    No, I have a feeling that it's just Apple under-clocking the official 1.66 GHz part to avoid being too similar to the Core Duo model. Unfortunately, the disassembly photos on the 'net aren't high enough quality to tell for certain, and the Core Solo -> 2.16 Core Duo upgrade page doesn't show the processors, or list the exact specs of the Solo.
     
  19. munkees macrumors 65816

    munkees

    Joined:
    Sep 3, 2005
    Location:
    Pacific Northwest
    #19
    Intel has for years done the clocking in the chip.

    When Intel makes its chips, many are produced on one wafer to save money. They test the chips on the wafer; the ones that fail then get tested for other aspects. This is how you end up with different clock speeds: the slower ones were poorly made.

    Intel used this business model on the 486, where a DX33 was a 66 that failed at speed, and the SX25 was one that failed at speed and/or had a faulty FPU. This gives them more yield per wafer and increases product profits.

    Source of my information: an Intel engineer I was working on a joint project with (putting custom Intel 486s in cell phones back in '95).
     
  20. SC68Cal macrumors 68000

    Joined:
    Feb 23, 2006
