One, two, three... Just give us a date!

I'm not sure right now if I care how many GPUs it has; I'd just like to replace the word "forthcoming" with a real date! Even if it is months out... :)
 
Hi guys! :D


I'm gonna buy this new MacBook Pro. I honestly don't care what's going to be included in the update; I just don't want to buy a MacBook now when I know there's something new/better coming in just a few days.

So? You think it's going to be released on the 23rd, like several people have mentioned?

Hi, exactly the same feelings here. I just can't imagine they wouldn't release it by the end of February.

Cheers
 
My Thoughts Exactly

I need a new MBP, but not so urgently that I'm going to risk buying one days or even just a few weeks before the new ones are announced. This one had better hold together!

Of course the fastest way to get the new ones here would be for me to order an existing model. That'd trigger an update for sure... :rolleyes:
 
Don't you guys think that if Apple is still in the planning stage, the new MBP is still a way off? I mean, I'm all for getting a new model now as well; there's no way I'm buying the Core 2 Duo and having the i7 come out the very next week/month. But with all this speculation, I don't think half of you are actually reading into it enough.

For example:

02/19: Apple Planning Smoother Transitions Between Graphics Processors in Upcoming MacBook Pros?

This could mean they're still fighting with Intel over who gets the graphics. Please correct me if I'm wrong; just adding my thoughts. :D
 
Don't you guys think that if Apple is still in the planning stage, the new MBP is still a way off? I mean, I'm all for getting a new model now as well; there's no way I'm buying the Core 2 Duo and having the i7 come out the very next week/month. But with all this speculation, I don't think half of you are actually reading into it enough.

For example:

02/19: Apple Planning Smoother Transitions Between Graphics Processors in Upcoming MacBook Pros?

This could mean they're still fighting with Intel over who gets the graphics. Please correct me if I'm wrong; just adding my thoughts. :D

Think you're putting a little too much faith in the headline. Arn has already admitted he uses misrepresentative headlines if they're sufficiently "catchy."
 
Think you're putting a little too much faith in the headline. Arn has already admitted he uses misrepresentative headlines if they're sufficiently "catchy."

OK, cheers, I thought that might be the case, but I remember there was talk about Apple wanting to skip Arrandale to make their own chip! Is that still a possibility, or was that just speculation again? I really am getting annoyed; I just want a new MBP to rub in my mate's face, who got a Core 2 Duo last week :D
 
Don't you guys think that if Apple is still in the planning stage, the new MBP is still a way off? I mean, I'm all for getting a new model now as well; there's no way I'm buying the Core 2 Duo and having the i7 come out the very next week/month. But with all this speculation, I don't think half of you are actually reading into it enough.

Oh, yes, I agree entirely. But I am still holding off on a purchase until this one starts to fall apart on me, since the general feeling is that there will at least be an announcement soon, even if the product doesn't ship right away.
 
OK, cheers, I thought that might be the case, but I remember there was talk about Apple wanting to skip Arrandale to make their own chip! Is that still a possibility, or was that just speculation again? I really am getting annoyed; I just want a new MBP to rub in my mate's face, who got a Core 2 Duo last week :D

They weren't happy with Arrandale, but they weren't going to make their own chip - more likely they were demanding Intel make them a chip without the second die and with the memory controller on board. They may even have succeeded in convincing Intel to do so, which could be what the delay is about.
 
They weren't happy with Arrandale, but they weren't going to make their own chip - more likely they were demanding Intel make them a chip without the second die and with the memory controller on board. They may even have succeeded in convincing Intel to do so, which could be what the delay is about.

But wouldn't it be cheaper for Intel just to disable the GPU at the microcode level and not have to deal with polishing a new architecture?
 
But wouldn't it be cheaper for Intel just to disable the GPU at the microcode level and not have to deal with polishing a new architecture?

Of course (though it's not clear it can be done at the microcode level - more likely they just need to blow fuses). But Apple might not be thrilled to pay the penalty for an off-chip memory controller.

Anyway, it's all just speculation. I'm an expert on AMD's chips, not Intel's, so I'd be guessing.
 
Of course (though it's not clear it can be done at the microcode level - more likely they just need to blow fuses). But Apple might not be thrilled to pay the penalty for an off-chip memory controller.

Anyway, it's all just speculation. I'm an expert on AMD's chips, not Intel's, so I'd be guessing.

For the Fusion tech, the GPU is part of the CPU structure, isn't it?

If it is, then despite the CPU part being less "powerful," AMD's solution would be superior.
 
Nah. Without me, AMD is doomed. :)

Well that certainly seems the case with the Phenom tech.

Oh, this might be a stupid question, but I've always wondered: how are CPU architectures actually made? Are they first built out of normal components, like GPUs, and then shrunk onto silicon? Or is it done by some funky supercomputer?

(I would post a picture, but AMD and nVidia supposedly asked to remove all pictures of the engineering-sample Fermi.)
 
Well that certainly seems the case with the Phenom tech.

Oh, this might be a stupid question, but I've always wondered: how are CPU architectures actually made? Are they first built out of normal components, like GPUs, and then shrunk onto silicon? Or is it done by some funky supercomputer?

(I would post a picture, but AMD and nVidia supposedly asked to remove all pictures of the engineering-sample Fermi.)

Not entirely sure I understand the question.
 
Like, when they're building a new CPU, how is the first concept iteration done? Is it pieced together transistor by transistor, or simulated by some supercomputer, like nVidia does?

Ah.

Well, using Opteron as an example - a bunch of us came up with the concept on a napkin at a fancy French restaurant. Then we write a behavioral simulator in a language like Verilog. This simulates things at a very high level - for example, to simulate an adder we essentially just say "A=B+C." In Opteron's case I wrote some of the Verilog to convert things to 64-bit, in the process inventing some of the instruction set.

Then, Verilog in hand, we start breaking things into blocks and figure out more or less what types of circuits would be in each block, where, physically, the blocks would be on the chip, and how big they would likely be. There are two types of blocks - macroblocks and standard cell blocks. Macroblocks are designed transistor by transistor. Standard cell blocks are designed at a slightly higher level, using gates (like "NAND," "NOR," etc.). The designers manually translate the Verilog into gates and/or transistors (at AMD, at least; most places, like nVidia, use a synthesis tool to do this automatically, but the results suck). We manually position the gates and draw the transistors in the macroblocks.

We then use "formal verification" to make sure that the resulting logic is mathematically identical to the behavioral Verilog.

The behavioral Verilog, by the way, is executed against a large suite of test patterns on a large collection of x86 computers (many thousands of machines, including all of our desktop workstations plus rack machines) 24 hours a day, and any anomalies generate an alert.
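To make the behavioral-versus-gate-level distinction above concrete, here is a minimal Verilog sketch - purely illustrative, not anything from AMD's actual codebase, with made-up module and signal names: a behavioral adder written the way the post describes ("A=B+C"), next to a single full-adder bit built from explicit gate primitives, the kind of cell a standard cell block or macroblock is ultimately composed of.

```verilog
// Illustrative sketch only -- names and structure are invented for this example.

// Behavioral level: the simulator just computes the result directly.
module adder_behavioral #(parameter WIDTH = 64) (
    input  [WIDTH-1:0] b,
    input  [WIDTH-1:0] c,
    output [WIDTH-1:0] a
);
    assign a = b + c;   // "A=B+C" -- no gates, no placement, just the math
endmodule

// Gate level: one full-adder bit expressed as explicit gate primitives,
// the sort of thing a designer (or a synthesis tool) produces from the
// behavioral description, and that formal verification checks against it.
module full_adder_bit (
    input  b,
    input  c,
    input  cin,
    output sum,
    output cout
);
    wire bc_xor, bc_and, carry_prop;

    xor g1 (bc_xor,     b, c);        // partial sum of the two operand bits
    xor g2 (sum,        bc_xor, cin); // final sum bit
    and g3 (bc_and,     b, c);        // carry generated by the operands
    and g4 (carry_prop, bc_xor, cin); // carry propagated from the input carry
    or  g5 (cout,       bc_and, carry_prop);
endmodule
```

The behavioral model is the one that runs against the test-pattern suite around the clock; the gate-level version is what eventually corresponds to transistors placed on the die.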
 
Forgive me if this has been asked/discussed before, but what is the purpose of the faster graphics card in the current MBPs anyway? I seem to recall seeing speed comparisons for various tasks done by the 9400 and 9600, and the latter was only a few percent faster. When I switch between them I notice absolutely zero difference in running graphics-intensive applications. The only difference is that the 9600 makes the computer hotter and louder and uses more electricity.

Apologies in advance for my ignorance...
 
Forgive me if this has been asked/discussed before, but what is the purpose of the faster graphics card in the current MBPs anyway? I seem to recall seeing speed comparisons for various tasks done by the 9400 and 9600, and the latter was only a few percent faster. When I switch between them I notice absolutely zero difference in running graphics-intensive applications. The only difference is that the 9600 makes the computer hotter and louder and uses more electricity.

Apologies in advance for my ignorance...

The 9600 is much faster than the 9400 in 3-D applications like games.
 
Ah.

Well, using Opteron as an example - a bunch of us came up with the concept on a napkin at a fancy French restaurant. Then we write a behavioral simulator in a language like Verilog. This simulates things at a very high level - for example, to simulate an adder we essentially just say "A=B+C." In Opteron's case I wrote some of the Verilog to convert things to 64-bit, in the process inventing some of the instruction set.

Then, Verilog in hand, we start breaking things into blocks and figure out more or less what types of circuits would be in each block, where, physically, the blocks would be on the chip, and how big they would likely be. There are two types of blocks - macroblocks and standard cell blocks. Macroblocks are designed transistor by transistor. Standard cell blocks are designed at a slightly higher level, using gates (like "NAND," "NOR," etc.). The designers manually translate the Verilog into gates and/or transistors (at AMD, at least; most places, like nVidia, use a synthesis tool to do this automatically, but the results suck). We manually position the gates and draw the transistors in the macroblocks.

We then use "formal verification" to make sure that the resulting logic is mathematically identical to the behavioral Verilog.

The behavioral Verilog, by the way, is executed against a large suite of test patterns on a large collection of x86 computers (many thousands of machines, including all of our desktop workstations plus rack machines) 24 hours a day, and any anomalies generate an alert.

Everything important is written on a napkin, you know.
 
Programs like 3D Studio Max or Maya will see performance advantages, along with applications that utilize OpenCL.

How much does a 9600 cost?

I'm asking because I always thought they should have offered it as an extra, not as a mandatory ingredient of the MBPs, especially given that switching between the cards is not straightforward. I never use my laptop for gaming or running the applications you mention, so all the 9600 does for me is add a little extra weight.

I bet that at least 50% of users don't need their 9600, and perhaps the number is as high as 90%. All these users would have happily saved their money and not gotten the extra card, even if it were only a $20 difference.

The idea of standard applications using the GPU has been around for years, but this sort of wishful thinking doesn't justify building years' worth of laptops with graphics cards that nobody uses and that everybody will have replaced by the time such applications become mainstream.

Which brings me to the second question (the first being the price of the card): why did Apple include these cards in the basic configuration instead of offering them as an option? It makes no sense to me; in fact, it seems like the greatest mismanagement in the history of Apple building and selling computer hardware.
 