I wasn't thinking of the dual QPI lanes, because I'm dumb. Remember that.
:cool:
We all have our off days. :eek: I know I do... :p

Sidewinder, would you weigh in on the memory speed we might see?
(Not asking you to break any NDA, just for your personal thoughts.)
 
Sidewinder, would you weigh in on the memory speed we might see?
(Not asking you to break any NDA, just for your personal thoughts.)
To be honest, I haven't given it much thought. I find all this speculation to be interesting, but not interesting enough to invest time in it myself. Apple is going to release what they are going to release. I am content to wait and see. My current Mac Pro is more than fast enough.

I am much intrigued by Snow Leopard!

S-
 
:cool:
We all have our off days. :eek: I know I do... :p

Sidewinder, would you weigh in on the memory speed we might see?
(Not asking you to break any NDA, just for your personal thoughts.)

If it's less than 1333, or if you want more than the default configuration, I'm guessing it makes the most sense to buy the least memory you can from Apple and replace it all with more, higher-performing memory.

The other factor is the default memory timings... hopefully they don't choose totally slack timings.
 
If it's less than 1333, or if you want more than the default configuration, I'm guessing it makes the most sense to buy the least memory you can from Apple and replace it all with more, higher-performing memory.

The other factor is the default memory timings... hopefully they don't choose totally slack timings.

If the bus isn't 1333 MHz, you can put in any speed you want and it'll run at 1066. Timing-wise they'll use the JEDEC standard, as usual: 7-7-7-20 for 1066, 7-7-7-24 for 1333.
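Just to put those JEDEC numbers in perspective, here's a quick back-of-the-envelope sketch (plain Python; the cas_latency_ns helper is just made up for illustration) converting CAS latency cycles into actual nanoseconds:

    # Convert DDR3 CAS latency (clock cycles) to first-word latency in nanoseconds.
    # DDR transfers twice per I/O clock, so the clock runs at half the "DDR3-xxxx" rate.
    def cas_latency_ns(ddr_rate_mt_s, cas_cycles):
        io_clock_mhz = ddr_rate_mt_s / 2        # e.g. DDR3-1066 -> 533 MHz clock
        cycle_time_ns = 1000.0 / io_clock_mhz   # one clock period in ns
        return cas_cycles * cycle_time_ns

    print(round(cas_latency_ns(1066, 7), 1))    # ~13.1 ns at 7-7-7-20
    print(round(cas_latency_ns(1333, 7), 1))    # ~10.5 ns at 7-7-7-24

So even the jump from 1066 to 1333 at the same CL only buys you a couple of nanoseconds on the first word.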
 
Umm, nice thread.

I think Apple and nVidia have been working more closely together for a few months now. You can see that in the newest MacBook series: they got the new nVidia technology and the newest nVidia notebook chip. So I'm pretty sure they will do something special and different with the upcoming new Mac Pro.
 
Umm, nice thread.

I think Apple and nVidia have been working more closely together for a few months now. You can see that in the newest MacBook series: they got the new nVidia technology and the newest nVidia notebook chip. So I'm pretty sure they will do something special and different with the upcoming new Mac Pro.

What, might I ask?

They're not going to use an nVidia chipset and add an integrated GPU, you know. :rolleyes:

(Uh, guys who know about Tylersburg... back me up here... there isn't an nVidia board that supports Gainestown, is there?)
 
What, might I ask?

They're not going to use an nVidia chipset and add an integrated GPU, you know. :rolleyes:


No, of course not. For example, a GTX 280 supporting CUDA.

Standard: ATI 4XXX
High-End: GTX 2XX (280?)
Professional: FX 5800

edit: OR, as I said, something special. Don't know what, but something special ;-)...
 
If the bus isn't 1333mhz, you can put in any speed you want and it'll run at 1066. Timing wise they'll use the JEDEC standard, like usual. 7-7-7-20 for 1066, 7-7-7-24 for 1333.
Not a bad idea anyway, as 3rd-party RAM has been known to be the cheaper way to upgrade. ;)

As far as timings go, CL = 7 for unbuffered ECC and CL = 9 (9-9-9-24) for registered (if it's even offered) will be common, and the most likely used. It will still be rather speedy. ;) Of course, if you want better, and it exists, you could opt to pay $$$ for the fastest you can find. :D

What, might I ask?

They're not going to use an nVidia chipset and add an integrated GPU, you know. :rolleyes:

(Uh, guys who know about Tylersburg... back me up here... there isn't an nVidia board that supports Gainestown, is there?)
Yes, there's a licensing method now, but some board makers may opt to use an nVidia chip (they likely still have them on hand and want to use them first). The Asus P6T6 WS Revolution is currently using the N200 chip ("Nvidia® nForce 200" per Asus's site).
Sure isn't. In fact Intel is suing Nvidia over QPI licensing. So who knows what is going to happen...
I'm not sure what will happen over this. :confused: Blow over? Blow up?!? :eek: :p

Anyone else care to weigh in here? :D
 
Yes, there's a licensing method now, but some board makers may opt to use an nVidia chip (they likely still have them on hand and want to use them first). The Asus P6T6 WS Revolution is currently using the N200 chip ("Nvidia® nForce 200" per Asus's site).

The N200 is on all X58 boards, but using it requires certification by nVidia. Those who don't have their boards certified (like Intel) have it disabled.
 
The N200 is on all X58 boards, but using it requires certification by nVidia. Those who don't have their boards certified (like Intel) have it disabled.

Incorrect.

nF200 is a bridge chip that nVidia would like board makers to use; but they don't have to. (And, indeed, many do not.)

I can conclusively say that Intel's DX58SO board does *NOT* have an nF200 chip onboard. nVidia opened up licensing of SLI to 'bare X58' boards, which is what Intel's board is.

X58 supports 36 lanes of PCI Express 2.0. The most common configuration is two x16 slots and one x4 slot. Some manufacturers make boards with a PCI Express switch and more than two physical x16 slots, where plugging a card into the third slot drops two of the slots to x8. You could even design a board that did that for four slots by using two such switches (and four x16/x8 slots).

Most manufacturers that have four-slot boards do use the nF200 chip, though. It takes a single PCI-e x16 slot, and bridges it into two PCI-e x16 slots. This allows the two cards on the bridge to communicate with *EACH OTHER* at full x16 2.0 speeds; but the two cards *COMBINED* share the single x16 link to the X58 chipset. Net effect: inter-CARD communication is full speed, but card-to-chipset communication is the same as if they were on a non-nF200 x8 slot via switch.
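Rough numbers, if it helps (a Python sketch using the standard PCIe 2.0 figure of ~500 MB/s per lane per direction; the slot layouts are just the ones described above):

    # Back-of-the-envelope PCIe 2.0 bandwidth: ~500 MB/s per lane, per direction.
    PCIE2_MB_PER_LANE = 500

    def slot_bandwidth_mb_s(lanes):
        return lanes * PCIE2_MB_PER_LANE

    # Plain X58: 36 lanes, commonly split x16 + x16 + x4.
    print(slot_bandwidth_mb_s(16))   # 8000 MB/s from one x16 slot to the chipset

    # nF200: two x16 slots hanging off a single x16 uplink.
    print(slot_bandwidth_mb_s(16))   # 8000 MB/s card-to-card, full speed
    print(slot_bandwidth_mb_s(8))    # 4000 MB/s per card back to X58 when both are busy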

As for memory speed, Intel's official spec for the publicly released Core i7 is that its onboard memory controller supports 1066 MHz DDR3. But board manufacturers are free to declare support for faster memory; it's then up to the motherboard maker to provide support at speeds beyond 1066: up to 1333 MHz on the non-Extreme parts, and "the sky's the limit" on the Extreme proc. Indeed, Intel's own DX58SO claims official support for 1600 MHz memory on an Extreme CPU, and the board provides settings up to 1866 MHz, although that is considered overclocking.

I'm happily running my board with 1333 MHz RAM at 1600 MHz.
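If anyone wants to see what those speed grades are worth on paper, here's a quick sketch (Python; peak theoretical triple-channel numbers only, real-world throughput is obviously lower):

    # Peak DDR3 bandwidth: 8 bytes per transfer per channel, times the transfer rate.
    def triple_channel_gb_s(ddr_rate_mt_s, channels=3):
        return ddr_rate_mt_s * 8 * channels / 1000.0    # decimal GB/s

    for rate in (1066, 1333, 1600, 1866):
        print(rate, round(triple_channel_gb_s(rate), 1), "GB/s peak")
    # 1066 -> 25.6, 1333 -> 32.0, 1600 -> 38.4, 1866 -> 44.8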
 
You can't overclock Intel Macs save for a few pieces of software, and everyone knows that going from 5-5-5-15 to 4-4-4-12 will be almost unnoticeable. Without the ability to tighten the timings yourself, I don't see why someone would bother to get memory that's slightly better than standard.
 
You can't overclock Intel Macs save for a few pieces of software, and everyone knows that going from 5-5-5-15 to 4-4-4-12 will be almost unnoticeable. Without the ability to tighten the timings yourself, I don't see why someone would bother to get memory that's slightly better than standard.

No one that is buying a Mac, that is for sure. With other platforms, the better memory should allow for overclocking. How else does anyone think they got the i7 920s running at 965 speeds? ;)
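For anyone wondering how that works: the core clock on these chips is base clock × multiplier, and the 920's multiplier is locked, so getting to 965 speeds means raising the base clock, which drags the memory clock up with it. Hence wanting RAM rated faster than stock. A rough sketch with nominal figures (Python; assuming the memory multiplier is left at its stock setting):

    # Nehalem core clock = base clock (BCLK) * multiplier.
    STOCK_BCLK_MHZ = 133.33
    I7_920_MULTIPLIER = 20          # locked on the 920
    TARGET_GHZ = 3.2                # the i7 965's stock speed

    needed_bclk = TARGET_GHZ * 1000 / I7_920_MULTIPLIER
    print(round(needed_bclk))       # ~160 MHz BCLK, up from ~133 stock

    # The memory clock scales with BCLK, so DDR3-1066 gets pushed along with it:
    print(round(1066 * needed_bclk / STOCK_BCLK_MHZ))   # ~1279 MT/s effective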
 
To be quite honest, I didn't know what you meant. You said the "holes are too small to pull enough air thru" and that is all I had to go on.

Looking at the image, it is impossible to tell if the ratio of open area to closed area is less, the same, or greater. We would need to see a closeup of the pattern to know for sure.

S-

Well then, don't comment. Looking at your other posts, all you do is pooh-pooh what people have to say. You don't add much to the conversation.
 
Well then, don't comment. Looking at your other posts, all you do is pooh-pooh what people have to say. You don't add much to the conversation.
You don't get to tell me what I can and can't do.

My only comment was that hole size is not a big factor in airflow. It's the ratio of open space to closed space in the given area. If you don't like it, say what you actually mean to say next time....

S-
 
Well then, don't comment. Looking at your other posts, all you do is pooh-pooh what people have to say. You don't add much to the conversation.

You don't get to tell me what I can and can't do.

My only comment was that hole size is not a big factor in airflow. It's the ratio of open space to closed space in the given area. If you don't like it, say what you actually mean to say next time....

S-

Keep it in PMs, please.

Or, actually, don't fight at all. :)
 
No one that is buying a Mac, that is for sure. With other platforms the better memory should allow for overclocking. How else does anyone think they got the i7 920's running at 965 speeds? ;)

That's true, but if you're settling for 3.2 GHz on a 920, I feel bad for you :p
 
Hai guyz:

I just realized something fairly important today. As you know, Leopard is not 64-bit. It instead uses Physical Address Extension to pull off 4GB+ RAM use. I recall reading once that this effectively gives you 36-bit memory access, at least as far as total RAM goes. Which raises the question: how much RAM is that?

A quick check of Wiki reveals a problem:

http://en.wikipedia.org/wiki/Physical_address_extension

The answer is 64GB. So 96GB is not happening at launch unless it is released with Snow Leopard (possible). Leopard cannot address it.
 
The answer is 64GB. So 96GB is not happening at launch unless it is released with Snow Leopard (possible). Leopard cannot address it.

Another reason for it not to come with Leopard. Aside from the obvious factor that 10.5 is a proven OS, is there any reason for Apple to ship the new Mac Pros with it? Apple seems to feel their latest OS incarnation is the best version anyway.

Also, can someone point me to any discussion on the expected release date of 10.6 (or post it here)?
 
Hai guyz:

I just realized something fairly important today. As you know, Leopard is not 64-bit. It instead uses Physical Address Extension to pull off 4GB+ RAM use. I recall reading once that this effectively gives you 36-bit memory access, at least as far as total RAM goes. Which raises the question: how much RAM is that?

A quick check of Wiki reveals a problem:

http://en.wikipedia.org/wiki/Physical_address_extension

The answer is 64GB. So 96GB is not happening at launch unless it is released with Snow Leopard (possible). Leopard cannot address it.

The 36-bit memory address space gives you:

2^36 = 68,719,476,736 bytes = 67,108,864 KB = 65,536 MB = 64 GB
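Or, as a quick sanity check (Python, same arithmetic):

    # 36-bit physical address space under PAE
    print(2**36)             # -> 68,719,476,736 bytes
    print(2**36 // 1024)     # -> 67,108,864 KB
    print(2**36 // 1024**2)  # -> 65,536 MB
    print(2**36 // 1024**3)  # -> 64 GB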

Now, if Snow Leopard is ready when the new Mac Pro is ready to be released, it doesn't make a lot of sense, to me anyway, to go through the effort and expense of fully testing and certifying Leopard on the new Mac Pro. My guess is that Snow Leopard will ship after the new Mac Pro.

I don't see this as a big problem because I don't think anyone is going to want to buy the RAM required to get to 96GB. Are there applications today that need more than 64GB of RAM? Will there be this summer? I don't think so....

S-
 