Did you give it to them with the memory and the extra ODD? If you didn't, then what happens if it was the RAM or ODD, either directly or indirectly, that was causing the trouble? :p
 
Did you give it to them with the memory and the extra ODD? If you didn't, then what happens if it was the RAM or ODD, either directly or indirectly, that was causing the trouble? :p
:D Even if it was, it wouldn't come back - judging from some other posts floating around... "sent it in, and it was missing the x, y, z I added". :eek: :p
 
Hopefully you will get a new machine. That would be the right thing for Apple to do.


On a side note, if you don't use the full 16GB of RAM, try to opt for 12GB, as the triple-channel effect will speed up your system considerably. With 16GB of RAM you lose the speed boost of triple-channel memory.

Also, the 2.93GHz has Turbo Boost built in (all the Nehalems in the Mac Pros do), which will effectively let your system run at 3.2GHz if needed. Head on over to Intel's website and look up the Xeon Nehalem processors for more info; they have some good videos on there about them.

Best of luck man, hopefully you'll be getting a new system soon.
 
On a side note, if you don't use the full 16GB of RAM, try to opt for 12GB, as the triple-channel effect will speed up your system considerably. With 16GB of RAM you lose the speed boost of triple-channel memory.
From a technical standpoint, leaving the 4th DIMM socket empty will allow triple channel operation.

Unfortunately, most usage patterns won't benefit from triple channel, so the extra capacity (dual channel) would be a better way to go. As always, the details are dependent on the specifics, though. But such a general statement isn't completely accurate.
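To put rough numbers on the triple- vs dual-channel trade-off, here's a quick back-of-the-envelope sketch. It assumes the Nehalem Mac Pro's DDR3-1066 memory and only computes theoretical peak bandwidth; real-world throughput is lower and, as noted above, most workloads never come close to saturating either configuration:

```python
# Theoretical peak memory bandwidth for DDR3-1066:
# 1066 million transfers/sec x 8 bytes (64-bit channel) per transfer.
TRANSFER_RATE_MT_S = 1066   # DDR3-1066
BYTES_PER_TRANSFER = 8      # 64-bit wide channel

def peak_bandwidth_gb_s(channels):
    """Theoretical peak bandwidth in GB/s for a given channel count."""
    return channels * TRANSFER_RATE_MT_S * BYTES_PER_TRANSFER / 1000

dual = peak_bandwidth_gb_s(2)    # 4 DIMMs filled (e.g. 16GB) -> dual channel
triple = peak_bandwidth_gb_s(3)  # 3 DIMMs filled (e.g. 12GB) -> triple channel
print(f"dual channel:   {dual:.1f} GB/s")
print(f"triple channel: {triple:.1f} GB/s ({triple/dual - 1:.0%} more)")
```

On paper that's 50% more peak bandwidth for the three-DIMM layout, but whether an application ever notices depends on how memory-bound it actually is, which is the point being made above.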

Also, the 2.93GHz has Turbo Boost built in (all the Nehalems in the Mac Pros do), which will effectively let your system run at 3.2GHz if needed. Head on over to Intel's website and look up the Xeon Nehalem processors for more info; they have some good videos on there about them.
Turbo Mode is useful if the application is single-threaded and the other cores aren't used or are under a very light load. Otherwise, it's too hot, and TM won't kick in. :(

It has limits. ;) But it is still useful for its intended purpose, especially as many applications are single-threaded. :)
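As a toy illustration of how Turbo Boost scales with load: the chip adds "bins" of one bus clock (133MHz) on top of the 2.93GHz base, and how many bins it can add shrinks as more cores are active and thermal/power headroom disappears. The bin counts in the table below are assumptions for illustration, not Intel's published numbers:

```python
BASE_GHZ = 2.93
BIN_GHZ = 0.133   # one turbo bin = one 133MHz bus-clock step

# Hypothetical bins granted per number of active cores:
# 1 busy core can add 2 bins (-> ~3.2GHz, the figure quoted above),
# all 4 busy may add none (no thermal/power headroom left).
BINS_BY_ACTIVE_CORES = {1: 2, 2: 1, 3: 0, 4: 0}

def turbo_freq_ghz(active_cores):
    """Effective clock under this toy model of Turbo Boost."""
    return BASE_GHZ + BINS_BY_ACTIVE_CORES[active_cores] * BIN_GHZ

for n in (1, 2, 4):
    print(f"{n} active core(s): ~{turbo_freq_ghz(n):.2f} GHz")
```

This is why Turbo helps single-threaded work the most: with one core loaded the model reaches roughly 3.20GHz, but with all cores busy it stays at the base clock.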
 
Did you give it to them with the memory and the extra ODD? If you didn't, then what happens if it was the RAM or ODD, either directly or indirectly, that was causing the trouble? :p

Yep, decided not to mess about mate, I just want the problem sorted. If it means losing the extras - so be it.
 
Cheers for your replies again JamesGorman/nanofrog/Tesselator -

Not being entirely au fait with the more technical aspects of the MP, it's great to learn from you guys.
tbh I still can't get my head around how Apple can release a new line of MPs that aren't significantly better than the previous gen. :confused:

I'm primarily using Logic Pro / Ableton Live - does anyone know whether these take advantage of the Nehalems' 'Hyper-Threading' capability I've seen mentioned?

NB still waiting on my test results !
 
sound kev said:
tbh I still can't get my head around how Apple can release a new line of MPs that aren't significantly better than the previous gen.

well, they're not slower per se...
(someone please correct me on the following if i'm wrong)

There are a number of reasons... while the Harpertowns have higher clock speeds, the Nehalems have triple-channel RAM, no FSB, Hyper-Threading and Turbo Boost. Not to mention that most software is still optimised for the Core 2 technology.

So as an all-round system, the nehalems are faster...but if you're doing intense single-threaded apps, the old Harpertowns still beat them ;)
 
Yeah, that sounds right to me too. Per clock they're like 5% ~ 20% faster depending on the kind of operations... But then Apple chose to scale each respective model down a CPU grade, so the end result is a machine that is about the same speed in general, faster at VERY few specialized things - like 3D rendering - and slower at a few things too.

But also keep in mind that Apple has to follow the Intel roadmap if they're going to stay with Intel chips, so it's really not their choice. The few choices Apple did make were all terrible IMO. Only 4 DIMM slots per proc.? Bumping down the speed of each respective MP model? Opting to release a quad-core system not user-upgradable to 8? Raising the prices $1,000 to $3,000 when you actually compare the proper speed relations between '08 & '09?

Trends would normally dictate a relative performance and spec increase per price point. This year we got the opposite: a relative price increase per performance point. :( To me it looks like a repeat of the behavior that almost bankrupted them a decade or so ago - but we've all read and discussed these things to death, I suppose. We've had 3 years of goodness, so let's hope this 4th year is just an odd year out. Rumors have it that Intel will be dropping prices on future new models, so maybe Apple will as well and self-correct their poor judgement. <shrug>
 
Maybe it's time for Apple to offer AMD's Opteron chips. Yeah, I know Apple has a sweetheart deal with Intel. Offering both would keep chip prices down through competition, IMHO.
 
I'm like you, I also struck the refurbished jackpot years ago with the 2006 Mac Pro.

My recommendation is to take out the extras, especially the RAM, if you want to keep them.
Anyway, when you send it in, they do take down the specs of the computer - or you can insist that they do.

If Apple can repair, they will repair. If not, they would exchange it for a new set, refurbished or brand new doesn't really matter as long as it works.

As for the speed difference for the expected new set...Does it matter? Is the difference in speed gonna be that huge? This really depends on what you use your Mac Pro for. I say don't bother if you're not a 3D modeler or use processor intensive applications.
 
Maybe it's time for Apple to offer AMD's Opteron chips. Yeah, I know Apple has a sweetheart deal with Intel. Offering both would keep chip prices down through competition, IMHO.
I don't see this happening in the near future. :p
 
I don't think they will give you a new machine, probably just replace whatever was toast. They will most likely have spare parts for those boxes at the factories for another year or so.
 
They said the same thing about the PPC too. It all depends on the relationship between Intel and Apple. Then again, Apple owning their own chip plant could yield something in the near future too.
Relationships are certainly part of it, but I'd think a switch to AMD would mean a substantial amount of rework of OS X (architectural differences), even though SL will be 64-bit (including the kernel this time), and it uses the same instruction set (licensed theirs to Intel). More work than it's worth IMO. Then there's the performance... :p

As for Apple owning their own fab & employing designers, it's aimed at mobile devices (the current aim of the fab & licensed architecture). Keep in mind the economy of scale works for the mobile market, but not for the MP. In the case of Server/Workstation parts, it's less expensive for Apple to let Intel (or any other chip maker) incur the design costs and divide them amongst many more parts (sold to multiple vendors). Apple can't afford this for a small market. The final product would end up way too expensive.

It could work out in favor of the iMac/MB/MBP... as well (some future processor line for laptops).
 
...I'd think a switch to AMD would mean a substantial amount of rework of OS X (architectural differences), even though SL will be 64-bit (including the kernel this time), and it uses the same instruction set (licensed theirs to Intel).
What rework? Both processors support the same instruction set. I can run any x86 or x86-64 OS on an AMD or Intel-based box with a 64-bit processor.

S-
 
What rework? Both processors support the same instruction set. I can run any x86 or x86-64 OS on an AMD or Intel-based box with a 64-bit processor.

S-
I'm thinking in terms of compiling the code. There are differences, despite the shared instruction set, so compilers do have components for specific CPUs, lending to optimization.

Presumably, Apple has written their own compiler (or outsourced it), and it was designed around the Intel chips/chipsets. The specifics occur during the OS installation (the parts needed get installed, but others may be present on the media), and it's transparent to the user. Not that it won't work, it just won't run as well as it should. So I'm imagining a step backwards per se. Like going from SL (supposed to be optimized) to Leopard, or worse. :p
 
Yeah, and doing something like that there WILL BE some code changes too! Probably mostly unforeseen ones, as is usually the headache, err, case. If it were something like a medium-sized app then probably (maybe) not... But a complex low-level app or an OS?!?! Yeah, there is going to be A BUNCH of work involved. But like someone else said earlier, we don't need to worry about it cuz it ain't happening and AMDs are sloooow anyway.
 
Yeah, and doing something like that there WILL BE some code changes too! Probably mostly unforeseen ones, as is usually the headache, err, case. If it were something like a medium-sized app then probably (maybe) not... But a complex low-level app or an OS?!?! Yeah, there is going to be A BUNCH of work involved. But like someone else said earlier, we don't need to worry about it cuz it ain't happening and AMDs are sloooow anyway.

Well... they're not THAT bad. They do virtualization very well. ;)
 
Yeah, and doing something like that there WILL BE some code changes too! Probably mostly unforeseen ones, as is usually the headache, err, case. If it were something like a medium-sized app then probably (maybe) not... But a complex low-level app or an OS?!?! Yeah, there is going to be A BUNCH of work involved. But like someone else said earlier, we don't need to worry about it cuz it ain't happening and AMDs are sloooow anyway.

Haha. Who do you think wrote x86_64? Intel licensed it from AMD, much like AMD licensed x86 from Intel. Any low-level app or OS built for those architectures will work on both CPU platforms. Now, not all AMD CPUs support the latest SSE stuff etc., but not all Intel CPUs support hardware virtualization. There are trade-offs, and in the server world AMD is still a very viable option and still does well, especially with their price cuts on their new Istanbul Opterons (6-core CPUs).
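Since AMD and Intel share the x86-64 instruction set but diverge on optional extensions (SSE levels, hardware virtualization), portable low-level code typically checks CPU features at runtime instead of assuming a vendor. A minimal sketch of that idea, parsing flags in the style of Linux's /proc/cpuinfo - note the sample strings below are made up for illustration, not real dumps:

```python
def parse_cpu_flags(cpuinfo_text):
    """Extract the feature-flag set from /proc/cpuinfo-style text."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return set(line.split(":", 1)[1].split())
    return set()

# Hypothetical samples: an Intel part with SSE4.2 and Intel VT (vmx),
# and an AMD part with AMD-V (svm) but an older SSE level.
intel_sample = "flags\t: fpu sse sse2 sse4_1 sse4_2 vmx"
amd_sample = "flags\t: fpu sse sse2 sse4a svm"

for name, text in (("intel", intel_sample), ("amd", amd_sample)):
    flags = parse_cpu_flags(text)
    print(f"{name}: sse4_2={'sse4_2' in flags}, svm={'svm' in flags}")
```

The binary runs on both vendors either way; the point is that the fast paths (and things like virtualization support) have to be gated on what the particular chip actually reports.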
 
Oh, I agree! It "should run". Of course the likelihood of that actually happening without having to do multiple test/debug cycles is just about nil. I think every developer will agree with that!
 
From a technical standpoint, leaving the 4th DIMM socket empty will allow triple channel operation.

Unfortunately, most usage patterns won't benefit from triple channel, so the extra capacity (dual channel) would be a better way to go. As always, the details are dependent on the specifics, though. But such a general statement isn't completely accurate.
Valid point, I didn't even think that it would revert to running dual channel. But yes, with the system working like this it would make more sense to have 16GB.

EDIT: wasn't there a study done by a company that showed triple channel had a 22% increase in performance, or something along those lines?
 
Yeah, and doing something like that there WILL BE some code changes too! Probably mostly unforeseen ones, as is usually the headache, err, case. If it were something like a medium-sized app then probably (maybe) not... But a complex low-level app or an OS?!?! Yeah, there is going to be A BUNCH of work involved. But like someone else said earlier, we don't need to worry about it cuz it ain't happening and AMDs are sloooow anyway.
Absolutely, but if the compiler was botched, it's an effort in futility to develop code on it. :D :p
Well... they're not THAT bad. They do virtualization very well. ;)
They certainly have their uses, no doubt. ;) Especially as a lower-cost alternative, assuming they're applicable to the task. :)
Haha. Who do you think wrote x86_64? Intel licensed it from AMD, much like AMD licensed x86 from Intel. Any low-level app or OS built for those architectures will work on both CPU platforms. Now, not all AMD CPUs support the latest SSE stuff etc., but not all Intel CPUs support hardware virtualization. There are trade-offs, and in the server world AMD is still a very viable option and still does well, especially with their price cuts on their new Istanbul Opterons (6-core CPUs).
Of course there's cross licensing. ;) And the architectural differences do apply, ported code or not.

Porting such code (same app, same OS) to another processor isn't that simple, unfortunately.

It does come down to how well the compiler works, and whether the code produced with it will work properly. This is more important than the source code, as the resulting issues are out of the developer's hands (no way to know if the compiler or the source is causing the problem). Even if they're aware the compiler has issues, they can't fix it unless they developed it as well.

Assuming the compiler is in fact 100% functional, then as Tesselator mentioned, there's a really good chance the source code will also need to be adjusted. :rolleyes: :(

It's not just a simple recompile = zero errors. :p
EDIT: was there not a study doen by a company that showed triple channel had a 22% increase in performance or something along those lines?
Seems about right. :)
 