edesignuk said:
No advantage what so ever. Do not buy your RAM from Apple. It's a rip-off!

I hear and understand where you are coming from. My thinking on the subject comes out of some trepidation. I'm fearful of doing any damage to such a fine machine. It gives me comfort to have an Apple technician install it and know that it has been done correctly. If I were to mess up, then it would be my problem!
 
machinehead said:
Good point. Even if the performance of the 90nm 2.0 GHz G5 chips is the same, the wattage is said to be about two-thirds that of the older 130nm chips. That would be a big plus for reducing the room heating effect, which is quite noticeable in the summer.

We're talking about perhaps 60 watts less heat output from a pair of the new chips, vs. the old ones. Which might allow the fans to run slower or less often, also.

Several press articles have said that the just-announced dual 1.8 GHz and dual 2.0 GHz G5's are still 130 nm chips, contradicting Apple's white paper which says that all G5's now have 90nm chips. Someone claimed to have been told the same thing in a phone call to Apple.

I guess we'll find out in a couple of weeks when the new machines start getting delivered. But it would be nice to have official clarification in the meantime.

Where does it say that? Just out of curiosity, I have yet to see the real evidence for myself =)
 
wdlove said:
I hear and understand where you are coming from. My thinking on the subject comes out of some trepidation. I'm fearful of doing any damage to such a fine machine. It gives me comfort to have an Apple technician install it and know that it has been done correctly. If I were to mess up, then it would be my problem!

Installing RAM is cake. It's not worth the extra hundred dollars or more to pay a guy to give you Apple RAM. I am sure you will install it just fine ;)
 
Well I went to the Apple store last evening, CF Card in hand hoping to put Photoshop through its paces on the new 1.8 machine, but they only had one of the new 1.8's out, and it wasn't loaded with Photoshop.

Maybe someone can answer my question from experience here...

My main gripe with my G4 is the time it takes to open RAW files in File Browser (10 seconds or so each) and the time it takes to open a wad (10-15) of large 25-30 MB TIFFs.

Even though I have 2 gigs of RAM in my G4, I would assume the FSB is its main handicap. I'm wondering if I will truly see any speed increase in dealing with these files (is file opening heavily FSB dependent, or is it more RAM and HDD dependent?).

Any experience from folks out there? I'd rather not plunk down 2 grand (after the educational discount, no less) if I won't see at least a 50% increase in the speed of my workflow.

Thanks!
 
MrSugar said:
Where does it say that? Just out of curiosity, I have yet to see the real evidence for myself =)

On the PowerMac page at apple.com, three pdf documents are linked about half way down the right-hand column, under the heading "Performance and Technical Data."

In the document titled "PowerPC G5 White Paper," it says on page 13 that "IBM uses a 90-nanometer process with more than 58 million transistors and ten layers of copper interconnects."

Now, that statement could be just marketing boilerplate lifted from one of IBM's glossy brochures by an Apple intern majoring in English lit. It is never spelled out in a detailed technical spec, in a manner that would give confidence to geeks, gearheads and curmudgeons, that the 90nm chips are used in all three new G5 models. The paper implies it, though, since no mention is made of the older 130nm process.

Meanwhile, an IT world article said yesterday that the older (130 nm) chips are used in the Rev. B dual 1.8 and dual 2.0 GHz G5's.

You can see why people are confused ... :confused:
 
machinehead said:
Meanwhile, an IT world article said yesterday that the older (130 nm) chips are used in the Rev. B dual 1.8 and dual 2.0 GHz G5's.

You can see why people are confused ... :confused:

The question remains: where did the writer of the article get his info? This is from the same news service that "broke" the story about Apple ordering 60GB drives from Toshiba without a single direct quote from Toshiba about it. Second, the story was written by someone in the Boston bureau, so he either called Cupertino for info or simply walked over to the Apple Store in Cambridge; either way, he does not name a source for his info, he just states it as fact.

I must admit that I know very little about the IDG news service, and I work in the media, but what I have read by them lately has not impressed me.
 
Soire said:
Just ordered a dual 2.5 GHz G5. The order is completed. :D

I was charged $150 in tax, even though the shipping address is Washington, DC. Is it the shipping or the billing address that determines whether they charge tax? Anybody know?


Anyways, I've been reading this site hardcore for the past three months. I was hoping for the 3 GHz, but hey, I guess I'll take what I can get. ;)


Don't worry at all about not having a 3 GHz - you won't be disappointed in the least! :cool:
 
wdlove said:
I hear and understand where you are coming from. My thinking on the subject comes out of some trepidation. I'm fearful of doing any damage to such a fine machine. It gives me comfort to have an Apple technician install it and know that it has been done correctly. If I were to mess up, then it would be my problem!

I've installed RAM on every Mac I've owned (IIci, G3 PB, 17" iMac, G5). I have never - not once - had a single problem. The only real caveat is to have clean hands and to ground yourself (in the electrical, as opposed to spiritual, sense).

Also, you could buy cheaper RAM and then pay a technician to install it for you.

Or, if you ever can get north of Boston (Burlington or thereabouts?), I'd happily install it for you.
 
Never had a problem installing my own RAM -

- Stay grounded to the case.

- Leave the machine plugged in with the power off.

- Handle the RAM stick carefully.
 
stuBCN75 said:
Wish I could afford to get 2 monitors!! Wishful thinking :)

Anyway, the dual 2.0 with the better graphics card and 1GB sounds like a good setup to me. I can't quite stretch my budget to the 2.5. I think the dual 2.0 will be quick enough for me.

I am a motion graphics and interactive designer. I have produced broadcast-quality sequences on much slower Macs before. I think I will get another monitor later on and more disk space when needed.

Anyone else going for the dual 2.0?

After 3 changes with my order I finally have what I am extremely happy with, and so excited to get...

It's in my sig...
 
Does PCI-X matter? Please advise.

I'm pondering the new G5s and weighing the 1.8 and 2.0 boxes. I'm frankly leaning toward saving $500 on the 1.8 GHz machine. The way I see it, I will be upgrading the RAM and the hard drive myself later for far less money. The only key difference, other than a modest FSB and processing speed bump, is the lack of PCI-X on the low-end unit.

You should know that I am still milking a six-year-old Beige G3 266 MHz with a bigger hard drive, a PCI video card, a FireWire/USB card and lots more RAM. When I bought the machine, I was running Lightwave, Adobe After Effects and other demanding apps, and that Mac was state of the art. But my career has changed, and more recently I've just been running Quicken, web surfing and doing some lightweight web design.

The sad news is that I just bought my wife a Dell 4600 for her home-based business. Total was $1,000 including 17" LCD monitor. I hate Windows XP and the thing is a pain in the butt, but compatibility woes forced me to join the dark side.

On the other hand, my daughter loves her iBook and is now on her second iPod. Now it's time for me to buy myself a new Mac. Because my demands have waned over the years, my chief concern is the longevity of the investment that an Apple machine represents. That Dell will be useless in three years, based on the Windows machines that I'm forced to use at work, while my six-year-old Mac is just now becoming frustrating to use.

My instinct is to pay more for the 2.0 G5, because in four years I will be shopping for upgrades that take advantage of the 133 MHz PCI-X slot. Then again, I suspect that Rev. C of the G5s will jump to PCI Express. While I am certain the 33 MHz PCI slots will be downright ancient in no time, I wonder if PCI-X is destined for history's dustbin, with no vendors supporting it in just a few years.

Any strong opinions or fortune tellers out there? Thanks in advance.
 
jsw said:
I've installed RAM on every Mac I've owned (IIci, G3 PB, 17" iMac, G5). I have never - not once - had a single problem. The only real caveat is to have clean hands and to ground yourself (in the electrical, as opposed to spiritual, sense).

Also, you could buy cheaper RAM and then pay a technician to install it for you.

Or, if you ever can get north of Boston (Burlington or thereabouts?), I'd happily install it for you.



Have you ever tried to install RAM on a PowerMac 7100AV???
 
jsw said:
I've installed RAM on every Mac I've owned (IIci, G3 PB, 17" iMac, G5). I have never - not once - had a single problem. The only real caveat is to have clean hands and to ground yourself (in the electrical, as opposed to spiritual, sense).

Also, you could buy cheaper RAM and then pay a technician to install it for you.

Or, if you ever can get north of Boston (Burlington or thereabouts?), I'd happily install it for you.


Actually, I had one that was a pain - the Mac Plus. You needed a special tool to crack the case and the RAM was close to the capacitors that held a zillion volts of electricity for the monitor.

If I remember correctly (it has been a few years, after all), you also had to cut a resistor lead to increase the RAM capacity.

Of course, that's nitpicking. Since around 1988 (at least), everything you say has been true. It's trivial.
 
MrSugar said:
You can purchase nice video cards from 3rd party vendors and put them in yourself. :) And no, it won't intrude on the PCI slot. I was just at ATI's site; the 9800 Mac SE with 256 megs of VRAM still only takes up the AGP slot and doesn't intrude on any neighboring slots.

Are you going by the photo on the ATI site or from personal experience? I just spoke to someone at Powermax and they said that the 9800 SE also intrudes on the PCI slot.
 
machinehead said:
Good point. Even if the performance of the 90nm 2.0 GHz G5 chips is the same, the wattage is said to be about two-thirds that of the older 130nm chips. That would be a big plus for reducing the room heating effect, which is quite noticeable in the summer.


Just wanted to expand on this because it's likely to lead to questions.

You are correct that the total wattage consumed by a 90 nm chip will be less than a 130 nm chip if everything else is the same (processor design, GHz rating, etc). That's the total energy being consumed - which means it's also the total energy that must be dissipated into the room.

The reason why you need more sophisticated cooling on the 2.5 G5 is twofold:

1. Energy consumption increases by greater than linear amounts with clock speed. So, the energy consumed by the 2.5 is more than 25% greater than the energy consumed by the 2.0 - when both use the same process conditions. You can look up the exact figures on the IBM site.

2. Energy DENSITY becomes the limiting factor on 90 nm chips, not total energy consumption. Let's use your ratio above (90 nm uses 2/3 of the energy of 130 nm). I'll make up figures because I'm too lazy to look them up, but they'll give an idea. We'll stick to a single chip speed for comparison.

Let's say that the 90 nm uses 60 W compared to 90 W for the 130 nm. And let's say that the chip area is proportional to the square of the line width (it isn't, but it will give an idea of what I'm talking about). So, the 90 nm has an area of 8.1 units while the 130 nm has an area of 16.9 units - or just over twice the area.

So, you have 60/8.1 or 7.4 watts per unit of area for the 90 nm chip vs only 5.3 watts per unit of area for the 130 nm chip.

In cases where heat transfer from the chip to the heat sink is limiting, this number (watts per unit area) is what really matters. So, it's actually easier to cool the 130 nm chip.

But, two factors enter into the equation to override that simplest analysis.

1. Eventually, you exceed your ability to remove the total heat content (the heat sink becomes too big, etc). When you reach the limit where total heat matters, going to a smaller chip is beneficial.

2. As you ramp up clock speed, you eventually generate such a high energy density that you can't remove the heat well enough with conventional technology - such as the G5/2.5. Even if the G5/2.5/90 doesn't generate more heat overall than the G5/2.0/130, it's much more concentrated - and therefore harder to remove.

I doubt if I've done a very good job of explaining this, but perhaps readers will get it. The bottom line is that it's not a simple matter of 'which chip generates less heat'. It's a matter of considering both total heat generation (watts) and energy density (watts per unit area).
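
If it helps, here's the same back-of-the-envelope math as a quick Python sketch, using the same made-up numbers as above (60 W vs. 90 W, and die area treated as proportional to the square of the line width). None of these are real 970 figures, just the illustrative assumptions from this post:

```python
# Back-of-the-envelope sketch of the power-density argument above.
# The wattages and the "area ~ line width squared" rule are the same
# illustrative assumptions used in the post, not real chip data.

def relative_area(line_width_nm):
    """Relative die area in arbitrary units (line width squared / 1000)."""
    return (line_width_nm ** 2) / 1000.0  # 90 nm -> 8.1 units, 130 nm -> 16.9 units

chips = [
    ("90 nm", 90, 60.0),    # hypothetical 60 W
    ("130 nm", 130, 90.0),  # hypothetical 90 W
]

for name, width_nm, watts in chips:
    area = relative_area(width_nm)
    print(f"{name}: area ~{area:.1f} units, density ~{watts / area:.1f} W per unit area")

# Output:
#   90 nm: area ~8.1 units, density ~7.4 W per unit area
#   130 nm: area ~16.9 units, density ~5.3 W per unit area
```

So even with lower total watts, the smaller chip ends up with roughly 40% more watts per unit area, which is exactly why the higher-clocked 90 nm part is the one that needs the fancier cooling.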
 
Studio Dweller said:
Are you going by the photo on the ATI site or from personal experience? I just spoke to someone at Powermax and they said that the 9800 SE also intrudes on the PCI slot.

I was just going off of the photo. I suppose I could be wrong because I don't know from personal experience. I will look into it more; my thinking is that it only takes up one slot. You should ask those people specifically if one of them is using the card; then you could know for sure. What do you need all the PCI slots for anyway?
 
jragosta said:
I doubt if I've done a very good job of explaining this, but perhaps readers will get it. The bottom line is that it's not a simple matter of 'which chip generates less heat'. It's a matter of considering both total heat generation (watts) and energy density (watts per unit area).

No, your explanation is very clear; thanks. And it's been stated elsewhere that energy density (rather than total watts) is the reason the 90nm, 2.5 GHz 970FX chip required liquid cooling.

Apparently the 2.0 GHz version of the 90nm 970FX, introduced for the Xserve a few months ago, doesn't require liquid cooling. Somewhere between 2.0 and 2.5 GHz, the power density goes beyond what can be handled by an air-cooled heat sink.

The info I was able to find in a Google search is very sketchy. But there were very early projections that the 970FX chip might require as little as 24.5 watts ... which would be fine for later laptop use, one would think.

But later reports cited a figure of 62 watts ... MUCH higher. Put a 60 watt light bulb in a close-fitting plastic box, and it will melt the box. BAD for laptop (PowerBook) use.

I lack perspective on the nitty-gritty of chip design/production. But I'd think that at the design concept stage, the expected power consumption can be roughly estimated, knowing the die size, the transistor count, and other process details.

Then when the actual chip is built and tested, they measure the power consumption, and it "is what it is" ... no way to change it then, I suppose.

But would it really turn out 2 or 3 times higher than expected ... so as to require an unexpected last-minute liquid cooling add-on ... and to throw the whole feasibility of future laptop use into question?

Hey, I'm just speculating. And if that is what happened, they aren't going to talk about it. But maybe someone's been there & done that, and can read between the lines. Where do Apple/IBM go from here with this hot [pun intended] 970FX chip?
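
For what it's worth, the kind of rough estimate I have in mind usually starts from the first-order dynamic-power relation P ≈ alpha * C * V^2 * f. Here's a minimal sketch with invented placeholder numbers (not actual 970FX figures); note that this relation ignores leakage current, which got much worse at 90nm and could be one reason the real-world numbers came in higher than the early projections:

```python
# First-order CMOS dynamic power estimate: P ~ alpha * C * V^2 * f.
# All inputs below are invented placeholders, for illustration only.
# Leakage and short-circuit power are ignored.

def dynamic_power(activity, switched_capacitance_farads, core_voltage, clock_hz):
    """Rough dynamic power in watts."""
    return activity * switched_capacitance_farads * core_voltage ** 2 * clock_hz

# Hypothetical: 30% switching activity, 20 nF effective switched capacitance,
# 1.3 V core voltage, 2.0 GHz clock.
print(f"{dynamic_power(0.3, 20e-9, 1.3, 2.0e9):.1f} W")  # ~20.3 W
```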
 
MrSugar said:
I was just going off of the photo. I suppose I could be wrong because I don't know from personal experience. I will look into it more; my thinking is that it only takes up one slot. You should ask those people specifically if one of them is using the card; then you could know for sure. What do you need all the PCI slots for anyway?

The photo doesn't look like it sticks out that much, but it's hard to be sure. I wish I could get a definitive answer so perhaps I'll try emailing ATI.

Regarding the need for the PCI slots, I presently have a G4 800 MHz with four slots which are being used by a 3-card Pro Tools HD3 system and a SCSI card for a DDS4 tape backup. As it stands with the G5, I'll be short a slot so I plan on keeping my G4 to handle the daily backups and possibly to help out on large Lightwave renders.
 
jragosta said:
Just wanted to expand on this because it's likely to lead to questions.

You are correct that the total wattage consumed by a 90 nm chip will be less than a 130 nm chip if everything else is the same (processor design, GHz rating, etc). That's the total energy being consumed - which means it's also the total energy that must be dissipated into the room.

The reason why you need more sophisticated cooling on the 2.5 G5 is twofold:

1. Energy consumption increases by greater than linear amounts with clock speed. So, the energy consumed by the 2.5 is more than 25% greater than the energy consumed by the 2.0 - when both use the same process conditions. You can look up the exact figures on the IBM site.

2. Energy DENSITY becomes the limiting factor on 90 nm chips, not total energy consumption. Let's use your ratio above (90 nm uses 2/3 of the energy of 130 nm). I'll make up figures because I'm too lazy to look them up, but they'll give an idea. We'll stick to a single chip speed for comparison.

Let's say that the 90 nm uses 60 W compared to 90 W for the 130 nm. And let's say that the chip area is proportional to the square of the line width (it isn't, but it will give an idea of what I'm talking about). So, the 90 nm has an area of 8.1 units while the 130 nm has an area of 16.9 units - or just over twice the area.

So, you have 60/8.1 or 7.4 watts per unit of area for the 90 nm chip vs only 5.3 watts per unit of area for the 130 nm chip.

In cases where heat transfer from the chip to the heat sink is limiting, this number (watts per unit area) is what really matters. So, it's actually easier to cool the 130 nm chip.

But, two factors enter into the equation to override that simplest analysis.

1. Eventually, you exceed your ability to remove the total heat content (the heat sink becomes too big, etc). When you reach the limit where total heat matters, going to a smaller chip is beneficial.

2. As you ramp up clock speed, you eventually generate such a high energy density that you can't remove the heat well enough with conventional technology - such as the G5/2.5. Even if the G5/2.5/90 doesn't generate more heat overall than the G5/2.0/130, it's much more concentrated - and therefore harder to remove.

I doubt if I've done a very good job of explaining this, but perhaps readers will get it. The bottom line is that it's not a simple matter of 'which chip generates less heat'. It's a matter of considering both total heat generation (watts) and energy density (watts per unit area).

Nice post, it did help clear things up for me a lot actually.
 
I was reading the PowerMac page on Apple.com and read somewhere that the graphics card DOES take up one PCI slot. I cannot find the link now, but I know I read that...
 
Studio Dweller said:
The photo doesn't look like it sticks out that much, but it's hard to be sure. I wish I could get a definitive answer so perhaps I'll try emailing ATI.

Regarding the need for the PCI slots, I presently have a G4 800 MHz with four slots which are being used by a 3-card Pro Tools HD3 system and a SCSI card for a DDS4 tape backup. As it stands with the G5, I'll be short a slot so I plan on keeping my G4 to handle the daily backups and possibly to help out on large Lightwave renders.

Gotcha. Well, the pic does make it look like it fits in one slot, but who knows. If you email ATI, let me know what you find out; I would be curious to know. In my opinion, the power-to-space ratio is rather important when it comes to internal cards.
 
machinehead said:
No, your explanation is very clear; thanks. And it's been stated elsewhere that energy density (rather than total watts) is the reason the 90nm, 2.5 GHz 970FX chip required liquid cooling.

Apparently the 2.0 GHz version of the 90nm 970FX, introduced for the Xserve a few months ago, doesn't require liquid cooling. Somewhere between 2.0 and 2.5 GHz, the power density goes beyond what can be handled by an air-cooled heat sink.

The info I was able to find in a Google search is very sketchy. But there were very early projections that the 970FX chip might require as little as 24.5 watts ... which would be fine for later laptop use, one would think.

But later reports cited a figure of 62 watts ... MUCH higher. Put a 60 watt light bulb in a close-fitting plastic box, and it will melt the box. BAD for laptop (PowerBook) use.

I lack perspective on the nitty-gritty of chip design/production. But I'd think that at the design concept stage, the expected power consumption can be roughly estimated, knowing the die size, the transistor count, and other process details.

Then when the actual chip is built and tested, they measure the power consumption, and it "is what it is" ... no way to change it then, I suppose.

But would it really turn out 2 or 3 times higher than expected ... so as to require an unexpected last-minute liquid cooling add-on ... and to throw the whole feasibility of future laptop use into question?

Hey, I'm just speculating. And if that is what happened, they aren't going to talk about it. But maybe someone's been there & done that, and can read between the lines. Where do Apple/IBM go from here with this hot [pun intended] 970FX chip?


My guess is that you're looking at a chip speed difference. Doubling clock speed more than doubles power consumption. So if the 2 GHz is 64 watts, it's quite possible that somewhere around 1.4 GHz would be 25 watts - which would make it suitable for a laptop.

Unfortunately, there's not a compelling marketing advantage for a 1.4 GHz G5 over a 1.5 GHz G4 - so they're not quite ready to use the G5 in a PowerBook yet.
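
Here's a quick sanity check on that scaling, starting from the 64 watt figure quoted above and treating the exponent as an assumption. Dynamic power is linear in frequency at a fixed voltage; the more-than-doubling comes from needing a higher core voltage at higher clocks, which pushes the effective exponent toward 3:

```python
# How a hypothetical 64 W at 2.0 GHz might scale down to 1.4 GHz under
# different frequency/power exponents. The exponent is an assumption; real
# parts fall somewhere between linear (fixed voltage) and cubic (scaled voltage).

def scaled_power(p_ref_watts, f_ref_ghz, f_new_ghz, exponent):
    return p_ref_watts * (f_new_ghz / f_ref_ghz) ** exponent

for exponent in (1, 2, 3):
    print(f"exponent {exponent}: ~{scaled_power(64.0, 2.0, 1.4, exponent):.0f} W at 1.4 GHz")

# exponent 1: ~45 W
# exponent 2: ~31 W
# exponent 3: ~22 W   (roughly the ~25 W laptop-friendly ballpark)
```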
 
Love That Dual 1.8

Great price point for an Apple Dual Processor.
However, I just bought the 17" PowerBook, so I can only lust after this machine. This size screen on a laptop really helps with Dreamweaver and JBuilder X for Mac.

Concerning which machine to buy:
If you can afford it (and can get it past the wife, if that's an issue), I'd consider the dual 1.8 now and upgrade to the G6 when that comes out. In a year?
I bought my lottery ticket.
;)
 
MikeBike said:
Great price point for an Apple Dual Processor.
However, I just bought the 17" PowerBook, so I can only lust after this machine. This size screen on a laptop really helps with Dreamweaver and JBuilder X for Mac.

Concerning which machine to buy:
If you can afford it (and can get it past the wife, if that's an issue), I'd consider the dual 1.8 now and upgrade to the G6 when that comes out. In a year?
I bought my lottery ticket.
;)

That was one option I was considering - with refurb 1.8s running $1700 or so.

But, after considering it, I decided I'd rather just wait and use the money I saved for a faster G6 next year.

Of course, if you were buying for personal use and could consider resale values, the equation would change. But I'm not - I'm looking for a business computer, so the cost of buying a low end machine today would be lost.

Plus, we haven't fully depreciated my dual G4 yet, so I'd have to write that one off if I bought a new one today - increasing the cost even more.
 
Finally ordered mine, check out the sig. Can't wait, this baby is going to fly, especially when I put more RAM in it, which I'll order elsewhere.
 