Re: RenderMan Results?

Originally posted by fpnc
Seems like no one noticed or has yet to comment on the fact that the screen shot showing RenderMan results indicates that the 2.0 GHz G5 was only slightly faster than the Xeon running at 2.8GHz (also, I guess, a dual processor). What happened to the 2X advantage that Apple demoed at WWDC?

I can't remember anyone saying the 2GHz G5 was twice as fast as the 3GHz Xeon.

Actually, the single-proc Xeon tested faster on SPECint and slightly slower on SPECfp.

It's still interesting that a processor running 1GHz slower is still 15% faster at getting the job done.
 
Originally posted by Geetar
And that'll be...Toy Story 3? They've been itching to do that, but their Disney distribution deal doesn't make that a clever film for Pixar to make right now. But it'll be a sure-fire success, and it'll be made on the Mac.

Disney is riding Pixar's coattails. Pixar knows that, and so does Disney. Right now Disney gets 50% of the profits (or something like that), but Pixar is now big enough to survive on their own, and they only owe Disney another 2-3 movies (it's contractual). They're making a movie called "Cars", and that should be the last of the Pixar-Disney movies before Pixar starts making movies completely without Disney's unnecessary presence.
 
Actually, it's Disney who wants to capitalize on the Toy Story 3 idea, and Pixar doesn't want to do it because it lacks creativity. They just don't want to do another sequel (I wouldn't either).

Pixar has moved on and wants to push the creative envelope (as they did with A Bug's Life, Monsters Inc., and Nemo), but Disney is just interested in the cash cow.
 
Re: Re: RenderMan Results?

Originally posted by visor
I can't remember anyone saying the 2GHz G5 was twice as fast as the 3GHz Xeon.

Actually, the single-proc Xeon tested faster on SPECint and slightly slower on SPECfp.

It's still interesting that a processor running 1GHz slower is still 15% faster at getting the job done.

Here is how they stack up in the SPECfp_rate2000:
Apple G5 2GHz PowerPC 970 x 2: 15.7
Dell Precision 650 3.06GHz Xeon x 1: 13.6
Dell Precision 650 3.06GHz Xeon x 2: 17.3
Dell 1750 3.06GHz Xeon x 2: 19.6
IBM Power 4 1450MHz x 2: 19.6
Sun Blade 2000 1.2GHz USIII x 2: 19.7
Opteron 144 1.8GHz x 2: 24.7
Intel Itanium 2 1.5GHz x 2: 37.3
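
For a rough per-chip comparison you can divide the dual-processor rates by two. A quick back-of-envelope sketch, using only the figures listed above (so take it with the same grain of salt):

```python
# Back-of-envelope per-CPU throughput from the SPECfp_rate2000 figures
# listed above. These are just the posted numbers, nothing official.

rates = {
    "Apple G5 2GHz PowerPC 970 x2":   (15.7, 2),
    "Dell Precision 650 3.06GHz x1":  (13.6, 1),
    "Dell Precision 650 3.06GHz x2":  (17.3, 2),
    "Dell 1750 3.06GHz x2":           (19.6, 2),
    "IBM Power4 1.45GHz x2":          (19.6, 2),
    "Sun Blade 2000 1.2GHz USIII x2": (19.7, 2),
    "Opteron 144 1.8GHz x2":          (24.7, 2),
    "Intel Itanium 2 1.5GHz x2":      (37.3, 2),
}

for name, (rate, cpus) in rates.items():
    print(f"{name:32s} {rate / cpus:5.2f} per CPU")

# The one box listed in both configs shows how little the second Xeon adds
# on a shared front-side bus: 17.3 / 13.6 is only about a 1.27x gain.
print("Precision 650 dual/single scaling:", round(17.3 / 13.6, 2))
```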
 
Originally posted by kcmac
How much did the Intel renderfarm cost Pixar? So far, they are splitting $320M with Disney on Nemo. This will only go much higher when the DVDs go on sale.

I can't see that a renderfarm is more than a drop in the bucket, a tax write-off, a business expense. Heck, they could probably donate it and write the whole thing off. Since I am not an accountant or knowledgeable about costs, I could be sooooo wrong. But come on! :p

This is bad business.

You don't ignore an expense just because it is smaller than your revenues.

The real question is this:

Given their existing render farm, how much would it cost to replace it with this year's hardware (and possibly a new underlying architecture, i.e. PPC instead of IA-32)? That figure includes the tax deduction from donating the old hardware, or the income from selling it to someone else. Compare that to how much you will lose in productivity and time-to-market by staying with last year's hardware.

That is the sole business question. If it is better profits-wise to stick with a fleet of Dells and an Intel-based render farm, then that is what Pixar should do. If Pixar moves to OS X on their workstations *and* their render farm, it will be because the increased productivity of their engineers on the workstations and the power-per-dollar of the G5 in the farm offset the hardware and migration costs.
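
To make that concrete, here is a toy sketch of the comparison. Every number in it is invented purely for illustration; it is not a claim about Pixar's actual cost structure.

```python
# Toy model of the replace-vs-keep decision described above.
# Every number here is an invented placeholder, not a real Pixar figure.

def net_replacement_cost(new_hardware, migration, resale_or_deduction):
    """Cash cost of swapping the farm this year."""
    return new_hardware + migration - resale_or_deduction

def cost_of_staying(productivity_loss_per_month, months_until_next_refresh):
    """What the slower, older farm costs in lost output and schedule."""
    return productivity_loss_per_month * months_until_next_refresh

replace = net_replacement_cost(new_hardware=4_000_000,
                               migration=500_000,
                               resale_or_deduction=750_000)
stay = cost_of_staying(productivity_loss_per_month=300_000,
                       months_until_next_refresh=18)

print("replace now:   ", replace)   # 3,750,000
print("keep old farm: ", stay)      # 5,400,000
print("better choice: ", "replace" if replace < stay else "stay")
```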
 
Re: Re: Re: Re: Pixar & Apple

Originally posted by Lanbrown
SGI is also moving towards the Itanic, so they will just be another peecee manufacturer. It won't be hard for them to find a market; they should actually hurt the other peecee companies. If you want Itanic, you can go with a company that can build large systems and can do clustering, or you can go with someone that only does cluster setups and have little real world experience.

If you don't want Itanic, SGI will be out.

SGI is also much smaller than it once was. Their current CPU is really showing its age and has not been updated for some time now, just made faster in terms of MHz.

What do you mean they are moving towards the Itanic? SGI has one Itanium-based Linux server. That's it. The rest of their product line is all MIPS/IRIX. While it's true that a few years ago they started making Intel-based workstations, they have recognized the error of their ways and gone back to MIPS/IRIX. I'll bet that soon the Itanium will be gone as well.

If you think the MIPS processor is slow, YOU'RE INSANE! The reason it hasn't been updated much is that it doesn't need to be. It's a server/workstation chip like the IBM Power4. These chips are made to have an extremely long life. They are aimed at industries that upgrade every 10+ years and that need to know their equipment will still be supported for that long.

SGI
MIPS
 
EVERYONE is FREAKING out about this rumor. The truth is, we don't know if Pixar is going to replace their Intel servers with G5-powered Xserves, and we don't know how extensively they will even be using the G5.

The only thing we really know is that they have ported RenderMan to OS X, and are considering (via a rumor from a random MacCentral poster) switching to the G5 as a workstation, or maybe more.

Maybe it's on a limited basis, or maybe it's over the entire production pipeline. Bottom line is, just enjoy the fact that Apple has FINALLY made a piece of hardware worthy of Pixar's consideration and interest.

Regardless of who their CEO is, Pixar's only concern is productivity and fast turnaround, not their loyalty to the hardware/software they are using. Therefore, if it's Apple, no one should be able to accuse Pixar of doing this as a marketing ploy. They've done JUST FINE without using Apple for years and years.
 
Re: Re: RenderMan Results?

Originally posted by visor
I can't remember anyone saying the 2GHz G5 was twice as fast as the 3GHz Xeon.

Actually, the single-proc Xeon tested faster on SPECint and slightly slower on SPECfp.

It's still interesting that a processor running 1GHz slower is still 15% faster at getting the job done.

How about the Xeon being 30% slower in SPECfp and 5% faster in SPECint? Besides, I think he was referring to the "real world" benchmarks.

The Photoshop comparison showed the Dual 2 GHz G5 being *exactly* twice as fast as a Dual 3.06 GHz Xeon, the BLAST comparison showed it being three times as fast, and finally we have HMMer in which the Dual 2 GHz G5 was at least four times as fast and the Single 1.6 GHz G5 was well over twice as fast as a Dual Xeon.
 
Re: Re: Re: Re: Pixar & Apple

Originally posted by Lanbrown
SGI is also moving towards the Itanic, so they will just be another peecee manufacturer. It won't be hard for them to find a market; they should actually hurt the other peecee companies. If you want Itanic, you can go with a company that can build large systems and can do clustering, or you can go with someone that only does cluster setups and have little real world experience.

If you don't want Itanic, SGI will be out.

SGI is also much smaller than it once was. Their current CPU is really showing its age and has not been updated for some time now, just made faster in terms of MHz.

Exactly what does the Itanium have to do with the x86 PC market?
 
Re: Re: Re: Re: Renderman

Originally posted by Quila
Read the interview with the PPC 970's chief designer. He talks about compiler problems.

True, but I don't think they were the "optimizations" that people were talking about when they called Apple cheats - though I could be wrong: there were so many accusations of cheating it's hard to keep track! :)

By the way, I certainly wouldn't consider those compiler changes cheating, if those changes make it into the released GCC. It's hardly surprising that a transition from the G4 series to the 970 would require a similar transition for its compilers.

Mike.
 
Re: Re: Re: Re: Re: Renderman

Originally posted by whooley
By the way, I certainly wouldn't consider those compiler changes cheating, if those changes make it into the released GCC. It's hardly surprising that a transition from the G4 series to the 970 would require a similar transition for its compilers.


I agree, using the best compiler and switches is important for every platform.

Apple's underhanded tactic was to use such a compiler for the G5, yet not use a similarly optimized compiler for the Xeon.

If they wanted to be fair, they could have used the best compiler on each platform. Or conversely, they could have used GCC 3.2 - which had sub-par optimizations for both x86 and PPC.

But no, they used GCC 3.3 which has a ton of special tweaks written by chip vendor IBM -- but they didn't use the other compiler with a ton of tweaks by chip vendor Intel.

That's the issue of fairness - using IBM's tweaks but not Intel's....
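
For what it's worth, actually running that kind of best-compiler-per-platform comparison is conceptually simple. A rough sketch follows; the compilers, flags, and "bench.c" are placeholders for whatever each vendor recommends and whatever workload you care about, not a claim about what Apple or VeriTest actually used:

```python
# Rough harness for a "best compiler on each platform" comparison.
# The compiler commands, flags, and bench.c are illustrative placeholders.

import subprocess, time

candidates = {
    "gcc -O3": ["gcc", "-O3", "bench.c", "-o", "bench_gcc"],
    # On x86 you would also try the vendor compiler, e.g. Intel's icc:
    "icc -O3": ["icc", "-O3", "bench.c", "-o", "bench_icc"],
}

for label, compile_cmd in candidates.items():
    binary = compile_cmd[-1]
    subprocess.run(compile_cmd, check=True)      # build with this compiler
    start = time.perf_counter()
    subprocess.run(["./" + binary], check=True)  # run the workload
    elapsed = time.perf_counter() - start
    print(f"{label:10s} {elapsed:.2f}s")
```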
 
Re: RenderMan Results?

Originally posted by fpnc
Well, this is a minor issue and we're only talking about a single benchmark result, but we should be accurate. The G5 was only 12% faster in the RenderMan benchmark, not 20%. Therefore, a 3.2 GHz Xeon would scale to being just slightly faster than the 2 GHz G5. However, I don't believe that Xeons are available at 3.2 GHz (they still max out at 3.06 GHz). The 3.2 GHz parts are Pentium 4's.

Do the math:

G5 @ 2.0 GHz: 216 seconds
Xeon @ 2.8 GHz: 246 seconds

(246 - 216)
--------------- = 0.12
246

or

(1.00 - 0.12) x 246 = 216

Nothing to get real excited about, but we should try to be accurate in our posts.

The new 3.06 GHz Xeon with 1 MB of on-die cache actually performs slightly better than a 3.2 GHz Pentium 4 with an 800 MHz FSB in SPECint and slightly worse in SPECfp, if that is any indication of how it would perform in RenderMan.

Xeon DP 3.06, 1 MB On-Die L3 Cache
SPECint Base: 1242
SPECfp Base: 1173

Xeon DP 2.80
SPECint Base: 1022
SPECfp Base: 1010

All in all a 21% improvement in SPECint base and a 16% improvement in SPECfp base. Looking at how newer ICC compilers affected performance*, it is doubtful that the newer compiler played a significant role in improving that score. Intel is planning to come out with a 3.2 GHz version of the Xeon with L3 cache soon.

I don't know if either of these is indicative of how a 3.06 GHz Xeon would perform in Renderman.

*:link below
https://forums.macrumors.com/showthread.php?s=&threadid=31775&perpage=&pagenumber=2
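
Just to put that caveat into numbers, here is what a naive scaling of the posted 2.8 GHz RenderMan time by those SPECfp base figures would look like. It assumes RenderMan scales the way SPECfp does, which is exactly the part nobody knows, so treat it as illustration only:

```python
# Back-of-envelope only: scale the posted 2.8 GHz Xeon RenderMan time by
# the SPECfp base ratio of the two Xeons above. This assumes RenderMan
# tracks SPECfp linearly, which is exactly the part nobody knows.

xeon_28_renderman = 246.0   # seconds, from the screenshot discussed above
g5_20_renderman   = 216.0   # seconds

specfp_base_28  = 1010      # Xeon DP 2.80
specfp_base_306 = 1173      # Xeon DP 3.06, 1 MB L3

estimated_306 = xeon_28_renderman * specfp_base_28 / specfp_base_306
print(f"estimated 3.06 GHz Xeon time: {estimated_306:.0f}s")   # ~212s
print(f"dual 2.0 GHz G5 time:         {g5_20_renderman:.0f}s") # 216s
```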
 
Re: Re: Re: Re: Re: Pixar & Apple

Originally posted by JBracy
What do you mean they are moving towards the Itanic? SGI has one Itanium-based Linux server. That's it. The rest of their product line is all MIPS/IRIX. While it's true that a few years ago they started making Intel-based workstations, they have recognized the error of their ways and gone back to MIPS/IRIX. I'll bet that soon the Itanium will be gone as well.

If you think the MIPS processor is slow, YOU'RE INSANE! The reason it hasn't been updated much is that it doesn't need to be. It's a server/workstation chip like the IBM Power4. These chips are made to have an extremely long life. They are aimed at industries that upgrade every 10+ years and that need to know their equipment will still be supported for that long.

SGI
MIPS

Those MIPS processors can be found inside communication equipment as well.

SGI Origin 200 with 32 R14000A CPUs rates a 153 on SPECfp_rate2000. Their Altix 3000 with 32 1GHz Itanium 2s gets 443. Almost three times the performance.

SPECfp2000:
SGI Origin 3200 with one 600MHz R14k gets a 529
SGI Altix 3000 (1500MHz, Itanium 2) gets a 2055

SPECint2000:
SGI Origin 3200 1X 600MHz R14k gets a 500
SGI Altix 3000 (1500MHz, Itanium 2) gets a 1077

Selling the same chip for that long is INSANE. The Power4 is compatible with the Power5 and they are compatible with the Power3. They are not the same chip though; just like the Ultra SPARC III is not the same as the US III.
 
Re: Re: Re: Re: Re: Re: Pixar & Apple

Originally posted by Lanbrown
Those MIPS processors can be found inside communication equipment as well.

SGI Origin 200 with 32 R14000A CPUs rates a 153 on SPECfp_rate2000. Their Altix 3000 with 32 1GHz Itanium 2s gets 443. Almost three times the performance.

SPECfp2000:
SGI Origin 3200 with one 600MHz R14k gets a 529
SGI Altix 3000 (1500MHz, Itanium 2) gets a 2055

SPECint2000:
SGI Origin 3200 1X 600MHz R14k gets a 500
SGI Altix 3000 (1500MHz, Itanium 2) gets a 1077

Selling the same chip for that long is INSANE. The Power4 is compatible with the Power5 and they are compatible with the Power3. They are not the same chip though; just like the Ultra SPARC III is not the same as the US III.

Yes, the Itanium is faster, but that was not my point if you read my post. The MIPS is not a slouch.

Also, I think you will find that the Sun Ultra Sparc III is exactly the same as the US III. :)
 
Also, don't forget that SGI doesn't have all of its hopes in hardware. They also own Alias (Maya) and are the driving force behind OpenGL.

SGI - like Apple - is a niche player, and they are the best at what they do. At a company I worked for in London (a busy content creation / pre-press house), we replaced about 10 of our Sun servers with 2 SGIs. We went from about 2 crashes per month on average to never being down in over 2 years (maybe more, but I left after 2 years)!
 
Originally posted by JBracy
Also, don't forget that SGI doesn't have all of its hopes in hardware. They also own Alias (Maya) and are the driving force behind OpenGL.

Alias is a subsidiary of SGI
 
What do you mean they are moving towards the Itanic? SGI has one Itanium-based Linux server. That's it. The rest of their product line is all MIPS/IRIX. While it's true that a few years ago they started making Intel-based workstations, they have recognized the error of their ways and gone back to MIPS/IRIX. I'll bet that soon the Itanium will be gone as well.

He does have a point there: both SGI and HP have been trying to migrate the customers of their "house brand" RISC chips to IA64.

The customer base for both cores (PA-RISC and R10K) has been shrinking significantly in recent years, and the only reason both companies continue "recycling" these half-decade-old cores (rather than spend the resources to develop a new core) is that the remaining customer base is still economically important to them. In other words, they don't want to lose any of their current customers (to another vendor), but they don't want any new customers either unless it's for IA64.
 
Originally posted by Cubeboy
In other words, they don't want to lose their old customers (to another vendor) but they don't want any more customers for their respective RISC chips either..

Then why did SGI stop selling Intel/Linux-based workstations and just release new MIPS/IRIX-based ones, like the SGI Tezro?

If they don't want new customers for their MIPS/IRIX platform, then why spend the R&D on new systems?

If they want to move them to a new platform, then they need to actually sell a platform to move them to!

It's like saying that Apple doesn't want new customers; they only want to support the ones they already have!
 
Pixlet

What I'm really interested in seeing is how well Pixlet performs on my 1GHz TiBook. Fcheck seems to play full-frame animation pretty well, but it drops a frame here and there to maintain framerate. I would love to be able to fire up QuickTime, open a frame sequence, and have it play back in real time.........

Can you say aaaaaaaaaaaaaaaaaaaahhhhhhhhhhhhh.........
 
Re: Re: Re: RenderMan Results?

Originally posted by Lanbrown
Here is how they stack up in the SPECfp_rate2000:
Apple G5 2GHz PowerPC 970 x 2: 15.7
Dell Precision 650 3.06GHz Xeon x 1: 13.6
Dell Precision 650 3.06GHz Xeon x 2: 17.3
Dell 1750 3.06GHz Xeon x 2: 19.6
IBM Power 4 1450MHz x 2: 19.6
Sun Blade 2000 1.2GHz USIII x 2: 19.7
Opteron 144 1.8GHz x 2: 24.7
Intel Itanium 2 1.5GHz x 2: 37.3

Let's be a bit careful here. Apple's listing was done using gcc and the NAGWare Fortran 90 compiler virtually "out of the box"; that is not standard practice in the industry (all the other listings follow it). Standard practice is to report the best result using any compiler/tweak/OS necessary to achieve it--you cannot modify the hardware, though, since SPEC is a system test, not a CPU test.

Apple's result should not be listed, or should be listed with an asterisk, since the actual number would be much higher.

Take care,

terry
 
Re: ???

Originally posted by paulc
Still a tad confused. Pixar obviously knew all about the G5 when they decided their render farm was going to be Intel blades. And as far as I can tell, they are still going that way for rendering. Sort of implying that while they may choose G5s for workstations, when flat-out constant speed is needed, Intel is still the way to go.

I would guess it's a matter of cost, scalability, and availability. Pixar needed more rendering speed ASAP. We're talking around 1000+ CPUs. When Pixar bought the Intel-based blade servers, Apple was probably two years away from offering them something comparable in speed AND cost.

HSMM Imaging Studio
 
Specs and Statistics

Apple is the only one currently not using whatever compiler gives its processors the best performance rating.

*Everybody* else uses what makes them look best.

Apple, in the past, did use what made them look better.

But now they're using GCC, and GCC is not reporting the G5 anywhere near its true speed, because it's not optimized for it. Then again, GCC doesn't optimize for the other chips' SIMD units either, though ICC does.

If you look carefully, and also listen to Windows customers, Windows PCs *in use* are almost never as fast as their specs claim.

There was an extensive article about this posted here, and I doubt many people read it.

Specs these days are very much like statistics.

"There are three kinds of lies. Lied, damned lies, and statistics."
-- Samuel Clemens

Jaedreth
 
Re: Re: Re: Re: Re: Re: Renderman

Originally posted by AidenShaw
Apple's underhanded tactic was to use such a compiler for the G5, yet not use a similarly optimized compiler for the Xeon.

If they wanted to be fair, they could have used the best compiler on each platform. Or conversely, they could have used GCC 3.2 - which had sub-par optimizations for both x86 and PPC.

But no, they used GCC 3.3 which has a ton of special tweaks written by chip vendor IBM -- but they didn't use the other compiler with a ton of tweaks by chip vendor Intel.

Not true. Show me that "using the best compiler" is "fair"--most people think that doing so is part of what is wrong with benchmarking in general. Show me the evidence that any version of GCC is better on PPC than x86--most experts think the reverse is true and that, at best, GCC 3.3 is a little closer to platform parity than earlier versions. Show me the "ton of tweaks" Apple used--there were only two, and both may have been necessary to get the thing to compile at all.

You are confusing two things: IBM's intention to submit tweaks to GCC (most of which involve the fact that the 970 groups instructions in a way unique to that processor, plus a rumor that they may submit autovectorization a la ICC)--none of which were actually submitted, and none of which would necessarily be accepted given GCC's design goals (portability)--and Apple's misguided intent to "normalize out" the compiler (whatever that means).

The only thing underhanded here is how so many news sites spread such ill-informed B.S. about Apple "juicing" their benchmark that now it is accepted conventional wisdom.
 
GCC tweaks?

Yes, IBM did redo GCC a little. Why? Because the G4 version just wouldn't compile on a G5. Period.

It *does* handle some data forms properly.

And guess what? They had to *rig* it. Making it run properly, according to traditional programming conventions, would have required a complete rewrite of GCC for the G5. They didn't have the time.

So they used some coding tricks that allowed the compiler to guesstimate how the G5 was handling the data, but they had to get damned inventive. Their method was horribly inefficient and far from optimized.

But it made GCC run, and run "properly".

So if anything, GCC on the G5 is running like a G4 with one hand tied behind its back...

And the G5 still kicked ass.

Jaedreth
 