Originally posted by ColdZero
Also, the Opteron has HyperTransport links to each other processor's memory. The G5 does not share memory like this. If that board had 4 DIMM slots per processor, a G5 could only access 4GB of memory using 1GB DIMMs, whereas on the Opteron each processor could access the whole 16GB of the server through the HyperTransport links.

I like the point you make and agree. The pic of the 4-way Opteron board I posted earlier looks too large for a 1U enclosure, but it was posted to show one example of a 4-way Opteron system.
 
Originally posted by Sun Baked
The ideal solutions are ones that don't require trained monkeys to accomplish, something that an intern couldn't even screw up.

Water cooling and/or large heat sinks that require a screwdriver and thermal paste are less than ideal.

1. People who build servers know what they are doing, especially Mac technicians.
2. Thermal paste is now commonplace on all aftermarket and most stock CPU coolers.
3. AMD recommends using a screwdriver to attach the stock Athlon XP 2500+ heatsink to the CPU and motherboard. (I know because I installed one over the weekend.)
4. I have no formal training and have built several PCs, including my own, with no trouble.
5. Any company considering buying expensive servers could probably afford to keep a computer tech on the payroll.
 
OK, you've just named some things that would be in a market that the XServe isn't aiming for.

Sure, it may be powerful enough for a corporate cluster, but it's supposed to be simple enough for a novice to use.

Which has been one of the selling points of Mac OS Server for years.

The XServe was the first piece of hardware that brought ease of repair to the novice; before, it would have taken a trained Mac tech to do the repair -- or sending the unit out for service.

Now most every part should take less than 10 minutes to replace, including showing the intern how to do it.

And no thermal paste.
 
You make a strong point, but anyone with the "need" for a server is going to have some substantial computing knowledge.

And what do you have against thermal paste? AMD heatsinks come standard with it because it results in better cooling.
 

Attachments

  • thermal paste.jpg
ColdZero:

If 2 Opterons and everything fit fine in 1U, I fully expect that 4 Opterons and everything can fit in 2U, just not with many disks in there, and with only one or two PCI slots. For example, here's Newisys's quad 4U Opteron:

http://www.newisys.com/products/4300_specifications.html

In that pic, you can see that a lot of volume is lost to the 6 full length PCI slots and the additional half-length PCI slot. The case also holds 5 or 6 hot-swap SCSI disks. I think you'll agree that if they cut this down to perhaps one PCI slot and 3 or 4 disks they'd be in 3U territory at least. I still think 2U, but I don't think much can be settled on that matter.

Also the opteron has hypertransport links to each other processors memory. The g5 does not share memory like this. If that board had 4dimm slots per processor, a g5 could only access 4gb of memory using 1gb dimms. Where on the opteron each processor could access the whole 16gb of the server through the hypertransport links.
I think you're a bit confused here. Yes, Opterons use HyperTransport to link chips to other chips so that they can share memory. However, a quad G5 would also need to share memory, or else it's not a quad, it's separate machines. It would do this either by having one mammoth system controller with four FSBs and four or more channels of RAM, or by linking multiple smaller system controllers together with something like HyperTransport.
 
Originally posted by ddtlm
I think you're a bit confused here. Yes, Opterons use HyperTransport to link chips to other chips so that they can share memory. However, a quad G5 would also need to share memory, or else it's not a quad, it's separate machines. It would do this either by having one mammoth system controller with four FSBs and four or more channels of RAM, or by linking multiple smaller system controllers together with something like HyperTransport.

G5s do not share memory. They have separate interfaces to their individual banks of memory. That's why you need to install it in pairs (in dual machines) rather than DIMM by DIMM. Just because they don't share memory doesn't mean they are separate machines. They each have an FSB link to the chipset, and from there they connect to the I/O devices. The dual G5 does this same thing; just look at the diagram on Apple's site. Memory is not shared between processors. A quad G5 is just two more processors off the chipset. The processors then interface with the memory directly, Opteron-style. They don't share their memory; you need it in each bank.
 
Originally posted by ColdZero
G5s do not share memory. They have separate interfaces to their individual banks of memory. That's why you need to install it in pairs (in dual machines) rather than DIMM by DIMM. Just because they don't share memory doesn't mean they are separate machines. They each have an FSB link to the chipset, and from there they connect to the I/O devices. The dual G5 does this same thing; just look at the diagram on Apple's site. Memory is not shared between processors. A quad G5 is just two more processors off the chipset. The processors then interface with the memory directly, Opteron-style. They don't share their memory; you need it in each bank.
LOL...

The memory is shared.

The memory controller grabs the memory in 128-bit chunks to build the block of memory it's fetching, since each DIMM is 64-bits. It's grabbing memory chunks across two DIMMs at once.

G5 developer notes
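The 128-bit fetch described above can be sketched as a toy address map: two 64-bit DIMMs are read in lockstep, so every 128-bit access pulls 64 bits from each. The granule layout below is a simplified illustration of interleaved lockstep channels in general, not the actual logic of Apple's memory controller.

```python
# Toy model of a lockstep dual-channel fetch: each DIMM is 64 bits
# (8 bytes) wide, and the controller reads both in parallel to form
# one 128-bit (16-byte) access. In this simplified layout, 8-byte
# granules alternate between the two DIMMs.

GRANULE = 8  # bytes per DIMM per beat (64-bit data path)

def split_access(addr: int) -> tuple[int, int]:
    """Return (dimm_index, offset_within_dimm) for a byte address."""
    beat = addr // GRANULE      # which 64-bit granule this byte lives in
    dimm = beat % 2             # granules alternate DIMM 0, DIMM 1, 0, 1...
    offset = (beat // 2) * GRANULE + (addr % GRANULE)
    return dimm, offset

# A 16-byte access starting at address 0 touches both DIMMs at once:
touched = {split_access(a)[0] for a in range(0, 16)}
assert touched == {0, 1}  # both DIMMs participate in one 128-bit fetch
```

This is why the DIMMs must be installed in matched pairs: a lone module would leave half of every 128-bit access unbacked.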
 
Originally posted by ColdZero
G5s do not share memory. They have separate interfaces to their individual banks of memory. That's why you need to install it in pairs (in dual machines) rather than DIMM by DIMM. Just because they don't share memory doesn't mean they are separate machines. They each have an FSB link to the chipset, and from there they connect to the I/O devices. The dual G5 does this same thing; just look at the diagram on Apple's site. Memory is not shared between processors. A quad G5 is just two more processors off the chipset. The processors then interface with the memory directly, Opteron-style. They don't share their memory; you need it in each bank.

The memory *is* shared for a huge speed boost!
 
Originally posted by ColdZero
G5s do not share memory. They have separate interfaces to their individual bank of memory. That’s why you need to install it in pairs (in dual machines) rather than dimm by dimm.

You might want to read these threads I started, "A shift to Serial" and "G5 memory: Dual Channel or not". The G5 runs what is known as 'dual-channel' DDR. It was developed by Nvidia for use with the AMD Athlon XP but has seen wide adoption throughout the computing world due to the fall of RAMBUS. Unlike Windows-based systems (where it is not essential to use both channels), the G5 must use both channels, hence installing RAM in pairs.
The DDR memory modules in the dual 2.0GHz G5 run at 400MHz, but the dual-channel system allows a total of 800MHz, 400MHz per channel. Hence the memory is shared between the two processors.

The Opterons, however, each have their own bank of dual-channel DDR. (See the pic on page 2 of this thread.)
 
not quite.

Originally posted by manitoubalck
You might want to read these threads I started, "A shift to Serial" and "G5 memory: Dual Channel or not". The G5 runs what is known as 'dual-channel' DDR. It was developed by Nvidia for use with the AMD Athlon XP but has seen wide adoption throughout the computing world due to the fall of RAMBUS. Unlike Windows-based systems (where it is not essential to use both channels), the G5 must use both channels, hence installing RAM in pairs.
The DDR memory modules in the dual 2.0GHz G5 run at 400MHz, but the dual-channel system allows a total of 800MHz, 400MHz per channel. Hence the memory is shared between the two processors.

The Opterons, however, each have their own bank of dual-channel DDR. (See the pic on page 2 of this thread.)
You don't quite have that right.
Dual channel wasn't invented by Nvidia. Machines, especially servers, have used dual-bank, quad-bank, and octal-bank memory for ages. Pentiums and early PowerPCs needed SIMMs in pairs because a SIMM is 32 bits wide and the memory controller was 64 bits wide. Nvidia was first to market with dual-channel DDR, but that's hardly an 'invention', just a copy of what everyone else has done with every other RAM technology (RAMBUS, SDRAM, EDO, FPM...).
Also, dual-channel DDR400 does not cause the system to run at 800MHz; it causes the 400MHz memory to operate as a 128-bit-wide memory bank. This is very different.
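The arithmetic behind this is simple: peak theoretical bandwidth is bus width in bytes times transfer rate, so dual channel doubles throughput by doubling the width while the clock stays at 400MHz. A quick sketch (illustrative numbers only):

```python
def peak_bandwidth_gbs(bus_bits: int, transfers_per_sec: float) -> float:
    """Peak theoretical throughput in GB/s: bytes per transfer x transfer rate."""
    return (bus_bits / 8) * transfers_per_sec / 1e9

# One channel of DDR400: 64-bit bus, 400 million transfers per second.
single = peak_bandwidth_gbs(64, 400e6)   # 3.2 GB/s
# Dual-channel DDR400: the bus widens to 128 bits; the clock is still 400MHz.
dual = peak_bandwidth_gbs(128, 400e6)    # 6.4 GB/s

assert single == 3.2 and dual == 6.4
```

The 6.4GB/s figure matches Apple's quoted maximum, and it comes from the wider bus, not from an 800MHz transfer rate.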
 
Re: not quite.

Originally posted by ffakr
Nvidia was first to market with Dual-channel DDR, but that's hardly an 'invention', just a copy of what everyone else has done for every other ram technology used (RAMBUS, SDRAM, EDO, FPM...)
Also dual channel DDR 400 does not cause the system to run at 800MHz, it causes the 400MHz memory to operate as a 128bit wide memory bank. This is very different.

'This effectively doubles the bandwidth, enabling the Power Mac G5 to reach a maximum memory throughput of up to 6.4GB per second' from www.apple.com

OK, DDR400 operates at 3.2GB/s. While what you said is true about allowing the memory to operate as a 128-bit-wide bank, the fact that the memory throughput is 6.4GB/s shows that the dual-channel system operates at an effective 800MHz.
 
Re: Beyond the hardware what do you think xGrid is?

Originally posted by rsnyder@psu.edu
The story focuses on the Xgrid project mailing list that slipped out. Personally, I am wondering if the xGrid has anything to do with Oracle's 10g (as in "Grid" computing). The word at Oracle World was that Oracle will release a full OS X version with the release of 10g. There was also speculation that this would require the G5.

Since both Oracle and now Apple are talking grid computing, it would be sweet to run Oracle on OS X now that Sun is in the dumper.

I am holding two servers in my budget waiting for the G5 xServes. My goal is in three years to get off Sun and its overpriced hardware. The Apple reps know that there are more than a few like me holding off spending until the G5 xServe arrives.

The Xgrid mailing list has a lot of drama right now. People are being booted from it, and one guy is posting to it claiming to be a G5 Xserve beta tester.

I just wanted to get some ideas for topologies for existing/any CPUs, and that was not forthcoming on that list. The folks there are basically trying to get advance info on product announcements, not adding to the discourse.

Even questions I answered resulted in punishment.

Caution.

Rocketman
 
Re: Re: not quite.

Originally posted by manitoubalck
'This effectively doubles the bandwidth, enabling the Power Mac G5 to reach a maximum memory throughput of up to 6.4GB per second' from www.apple.com

OK, DDR400 operates at 3.2GB/s. While what you said is true about allowing the memory to operate as a 128-bit-wide bank, the fact that the memory throughput is 6.4GB/s shows that the dual-channel system operates at an effective 800MHz.

Actually no, it still doesn't run at an effective 800MHz. You can't access dual-channel DDR400 800 million times a second. 800MHz would likely lower the latency, as there would be more cycles per unit of time.
It makes the memory effectively 128 bits wide, which is very different.

I'm sorry to be a stickler, but this isn't apples and oranges, it's apples and puppies (quite different)
 
1RU or Smaller

Nearly every substantive decision coming from Cupertino these days is marketing based, not engineering based. It must be that way, as Apple is in high gear growth mode, and is playing some of the most aggressive hardball ever seen in the computer industry. My point?

There is zero chance that Apple will walk away from the slow but steady success they have had in the 1RU server space. In fact, if anything is being tinkered with in the Apple R&D labs, other than a 1RU form factor, it would more likely be smaller/denser, not larger.

Steve Jobs and Team are motivated these days by the idea of pushing the envelope, and just knocking the socks off of people with any new product released. So, what would do that in the next Xserve release?

I expect at least dual G5s in the next 1RU Xserve. I would just smile to see a quad G5 1RU Xserve announced. Not possible, someone says? Tell that to the guys being paid the big bucks at Apple to work 20-hour days in the lab chasing ideas that are "not possible." Don't bet against them.
 
Re: 1RU or Smaller

Originally posted by MacWhispers
Not possible, someone says? Tell that to the guys being paid the big bucks at Apple to work 20-hour days in the lab chasing ideas that are "not possible." Don't bet against them.

20 hours a day leaves them just 3 hours a day to sleep, after accounting for an hour of commuting and eating. The human body cannot endure that. A 20-hour workday would be less productive than a shorter workday for that reason.

But, what do you expect from this guy?
 
Re: Re: 1RU or Smaller

Originally posted by Phil Of Mac
20 hours a day leaves them just 3 hours a day to sleep, after accounting for an hour of commuting and eating. The human body cannot endure that. A 20-hour workday would be less productive than a shorter workday for that reason.

But, what do you expect from this guy?
I think this is just an extension of the old 'Steve Jobs the Tyrant' Apple stories.
I guarantee you that people don't make a habit of working 20-hour days at Apple just because SJ is in charge again. I'm sure it happens when there's a deadline, though... I've put in a few longer than that (in IT positions).
 
Re: Re: Re: 1RU or Smaller

Originally posted by ffakr
I think this is just an extension of the old 'Steve Jobs the Tyrant' Apple stories.
I guarantee you that people don't make a habit of working 20-hour days at Apple just because SJ is in charge again. I'm sure it happens when there's a deadline, though... I've put in a few longer than that (in IT positions).

Sure, but not on a regular basis!
 
Originally posted by ffakr
Actually no, it still doesn't run at an effective 800MHz. You can't access dual-channel DDR400 800 million times a second. 800MHz would likely lower the latency, as there would be more cycles per unit of time.
It makes the memory effectively 128 bits wide, which is very different.

I'm sorry to be a stickler, but this isn't apples and oranges, it's apples and puppies (quite different)

OK, sorry for flogging a dead horse, but this is a direct quote from www.apple.com/powermac/architecture.html

"400MHz dual-channel memory
The new Power Mac G5’s memory controller supports fast 400MHz, 128-bit DDR SDRAM, and a dual-channel interface enables main memory to address two banks of SDRAM at a time, reading and writing on both the rising and falling edge of each clock cycle. This effectively doubles the bandwidth, enabling the Power Mac G5 to reach a maximum memory throughput of up to 6.4GB per second"

So in essence we're both right: DDR400 has a bandwidth of 3.2GB/s (hence its other name, PC3200), and the max memory throughput is double that (6.4GB/s). Therefore it is also 128-bit DDR.

I would still like to see an Apple chipset that used Rambus, which goes up to a frequency of 1333MHz, with effective throughput up to 10.7GB/s. (I know that the cost and RAMBUS's inability to see eye to eye with JEDEC have been its downfall.) These specs are taken from www.rambus.com
 
Originally posted by manitoubalck
So in essence were both right, DDR400 has a bandwidth of 3.2GB's (hense its other name DDR3200,) the max memory thoughput is double that (6.4GB/s.) Therefore it is also 128-bit DDR.
I'm still not sure why that makes us both right... just because the other poster came to the same 6.4GB/s by misunderstanding how the memory controller works. ;-)

I would still like to see an Apple chipset that used Rambus, which goes up to a frequency of 1333MHz, with effective throughput up to 10.7GB/s. (I know that the cost and RAMBUS's inability to see eye to eye with JEDEC have been its downfall.) These specs are taken from www.rambus.com
No one sells (or really makes) RDRAM over 1066MHz right now. Apple would be pushing the envelope if they went 1066MHz, let alone 1333MHz.
Not only this, but since Rambus is even more rare these days, it's suffering worse economies of scale. Newegg.com has 512MB of RDRAM 1066 for $235. The same amount of DDR400 runs around $80, 3x less.
At this markup, if you wanted to fill a G5 with 8GB of Mushkin RAM from Newegg, it would cost around $1850. At 3x more, it would cost around $5550 to do the same. This is a huge issue.
Newegg doesn't even offer 1GB RIMMs. Given the size of Newegg, I'd say this indicates a potential availability problem too.
RAMBUS is being produced in 256Mbit densities while DDR is being made in 512Mbit densities, and companies are preparing 1Gbit DDR-II chips. You'd need 4x as many physical chips to get the same amount of memory on RDRAM as you'd need for upcoming DDR-II. Even with economies of scale, RDRAM could never cost as little as DDR unless the manufacturers could make up a lot of lost research time and bridge that density gap.
RDRAM also has a higher latency than DDR. The Opteron performs so well partially because it has an on-die memory controller that lowers the latency of memory access. I'm not sure it's a good thing to give Opteron more of an advantage in this area.
Also, don't forget that RIMMs are 16bit wide. One PC800 RIMM has as much bandwidth as one DDR 400 DIMM. RDRAM 1066 may sound way faster than DDR 400, but 2 RDRAM 1066 RIMMS actually provide the same bandwidth as dual channel DDR 533 (Apple's using dual DDR 400).

RDRAM has one advantage over DDR. It has a higher frequency.
It has several disadvantages: cost, density, latency, availability, width of data pipes.

The other thing I really really hate about rambus is their marketing. They love to talk up unreleased parts and make them sound like they are actually available.
The link off the main page says RDRAM is available from 800MHz to 1600MHz, but if you click on it, you get a press release stating that speeds from 800 to 1200MHz are available at 256Mbit densities. Now go into the real world and try to find anyone selling anything other than 800 and 1066MHz RDRAM.
Look into the site more and you find tons of stories about super high speed RDRAM, overclocked RDRAM, higher density chips... all presented as if this technology were available today. The majority of the site is dedicated to vaporware.

Imagine if you went to Apple's web site and the G5 page boasted about PPC 970s available from 1.6GHz to 3GHz.
 
Originally posted by manitoubalck
And what do you have against thermal paste. AMD Heatsinks come standard with it because it results in better cooling

and it goes great on bagels!
 
Originally posted by CMillerERAU
and it goes great on bagels!
And it'll void your warranty if Apple catches you rubbing your bagels on the heatsink.

Only the Apple Certified Techs are allowed to do that. ;)
 
In one year, when all the warranties are over, it may be interesting to see what the overclocking potential of the G5 is, or what performance gains can be acquired by upgrading the cooling solution. Also potential case upgrades with the upcoming BTX form factor (maybe a second optical bay or a 3.5" external bay for a Zip drive, etc...)

The comment about RAMBUS is just me wanting to see how the G5 would run on serial RAM. I am well aware of the company's downfalls :)
 
Originally posted by manitoubalck
In one year, when all the warranties are over, it may be interesting to see what the overclocking potential of the G5 is, or what performance gains can be acquired by upgrading the cooling solution. Also potential case upgrades with the upcoming BTX form factor (maybe a second optical bay or a 3.5" external bay for a Zip drive, etc...)

The comment about RAMBUS is just me wanting to see how the G5 would run on serial RAM. I am well aware of the company's downfalls :)

I'd prefer to see what kind of performance we would get with an on-die memory controller and good old DDR. I wonder what types of problems the engineers would run into if they had on-die memory controllers with shared memory in SMP configurations. I'd hate to trade away an on-board memory controller down the line for the benefit of having one large memory space. :)

Anyone post info on how to adjust the clock speed on these things yet?
 
Originally posted by ffakr
I'd prefer to see what kind of performance we would get with an on-die memory controller and good old DDR. I wonder what types of problems the engineers would run into if they had on-die memory controllers.

On-die memory controllers already exist; check out the line of AMD Athlon 64s and Opterons.
 
manitoubalck:

...and various Sun products had on-die controllers before AMD. (Not sure if anyone beat Sun to it.)
 