Originally posted by unsane1
First off, DOD is Domain of Depth, it is similar to Depth of Field, but contains even more information if I remember correctly.

No, DOD is a Shake-specific term: Domain of Definition. A more generic term is crop window; i.e., it tells the app that there's nothing outside of that area it has to be concerned with, other than the geometry of the frame size.
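For anyone who hasn't used a compositor, here's a rough sketch in C of what a DOD/crop window buys you (the struct and function names are made up for illustration, not Shake's actual API): the app only walks the rectangle that actually contains image data instead of the full frame.

#include <stddef.h>

/* Hypothetical image and DOD types, purely to illustrate the idea. */
typedef struct { int x1, y1, x2, y2; } Rect;                /* domain of definition */
typedef struct { int width, height; float *pixels; } Image;

/* Apply a per-pixel operation, but only inside the DOD.
 * Everything outside the DOD is known to be empty, so it is skipped;
 * the frame geometry (width/height) is still carried along untouched. */
void process_with_dod(Image *img, Rect dod, float (*op)(float))
{
    for (int y = dod.y1; y < dod.y2; ++y)        /* not 0..height */
        for (int x = dod.x1; x < dod.x2; ++x)    /* not 0..width  */
        {
            size_t i = (size_t)y * (size_t)img->width + (size_t)x;
            img->pixels[i] = op(img->pixels[i]);
        }
}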
 
Re: Renderman

Regarding how the Wintel crowd see the Pixar change: many will indeed claim that Pixar is choosing a platform that will harm their business, just because their CEO cares more about his other company than about Pixar. Absurd, but I agree, it will be said.

But it doesn't matter that much. Others IN the film/TV industry (and in other industries too) will take the Mac more seriously when they see Pixar's lead. THEY won't think Pixar is stupid. And when THEY start adopting Macs, Wintel users won't have the Steve Jobs excuse to explain it.

Regarding render farms... Apple doesn't MAKE a blade server. Now, if an Xserve Blade appears, that's a different story. (And rendering can typically be done across a mix of OS's anyway. Keep the Dells and add Xserves too.)

Minor tangent:

Originally posted by jaedreth
The icc does have extensive optimization, and gcc is basically just running on the G5. It's not optimized even for the G4's AltiVec. They did have to make minor alterations to it so it would recognize the hardware properly. I read an in-depth article on this elsewhere. Forgot where. But basically, just enough changes to make it run right. No AltiVec or anything...

Jaedreth

I've seen Windows devotees lately criticizing Apple for changing gcc to skew the test results when the G5s were demoed. They insist that Apple did not merely make minor alterations so gcc ran on the G5. Rather, they say, they tweaked it unfairly, using a speed-optimized gcc on the G5, but using plain old gcc on the Xeon. A cheat. (Not my opinion, just what I have seen stated.)

Any thorough, reputable, non-biased refutation of that claim--showing that Apple did NOT optimize gcc for their SPEC tests--would be a link I'd like to see, if anyone has a URL. (Otherwise, probably best to ignore this post rather than dig into the SPEC nonsense again!)
 
Originally posted by unsane1
Apple has the creative market by the balls, and no one really understands how. Adobe is pissed off, Microsoft is ignorant of the market, and I think Apple is coming up with a pretty good plan for how to take control of more of the market, and how to do it in very key places.

All the cool people will be using them, so who cares about everyone else? ;)

A few influential recording industry friends I have here in NY (one on the Developer A-list @ Apple) have said that Uncle Steve wants to corner the content CREATION software market on one end, and slide into the enterprise intro-to-Unix network (read: migration from WinNT to Linux) on the other. Apple has a corps of evangelists from their marketing department that goes to these Fortune 500 companies, finds out what UNIX apps would make them buy a Mac network, then courts the developers to port to the Mac.
Microsoft would have left them to it, my friends said, had their DOJ woes taken a bad turn. Now that the politicos are in their pocket$$, Microsoft is OFFICIALLY licensing Unix from SCO!
How's that for biting off the Mac platform!
 
How much did the Intel renderfarm cost Pixar? So far, they are splitting $320M with Disney on Nemo. This will only go much higher when the DVDs go public.

I can't see that a renderfarm is more than a drop in the bucket, a tax write-off, a business expense. Heck, they could probably donate it and write the whole thing off. Since I am not an accountant or knowledgeable about costs, I could be sooooo wrong. But come on! :p
 
Originally posted by kcmac
How much did the Intel renderfarm cost Pixar? So far, they are splitting $320M with Disney on Nemo. This will only go much higher when the DVDs go public.

I can't see that a renderfarm is more than a drop in the bucket, a tax write-off, a business expense. Heck, they could probably donate it and write the whole thing off. Since I am not an accountant or knowledgeable about costs, I could be sooooo wrong. But come on! :p

Not when one dual server can replace 4 Dells. You guys are making much ado about nothing. Pixar will rip out the whole render farm and put in a new one if it means they will double their productivity. They will not even bat an eye. What they care about is building the very best flick.

The G5 is an awesome processor, new generation and all, but what Pixar is really drooling over is the entire system. The dual 1GHz frontside buses, one for each processor. 8GB of the fastest RAM, etc, etc. The G5 was built from the ground up to be a Pixar engineer's dream machine. And for that matter a dream machine for anyone in the creative business.
 
Re: RenderMan Results?

Originally posted by fpnc
Seems like no one noticed or has yet to comment on the fact that the screen shot showing RenderMan results indicates that the 2.0 GHz G5 was only slightly faster than the Xeon running at 2.8GHz (also, I guess, a dual processor). What happened to the 2X advantage that Apple demoed at WWDC?

Actually, I think this is just closer to the truth. The G5 only brings Apple back to rough parity with the high-end Intel and AMD offerings. In any case, if Pixar is moving more of their production to PowerMac G5s then that is a very good thing for Apple and the "rest of us."

Read the whole report; this was only one particular test. Here is another quote from a Siggraph attendee:

Also of note, the Pixar booth was showing off Renderman on the new G5, rendering out film-res frames of Finding Nemo. The dual 2.0ghz G5 was rendering significantly faster than a dual 3.06ghz Xeon, which was interesting to see, and let's just leave it at that (please start another thread if you want to bash processors/OSes. Preferably on another web site entirely. Thanks!)

Search on Pixar to find the threads on the CGTALK.com site.
 
Re: Re: RenderMan Results?

Originally posted by peterh
While not as good as Apple's hype, it still is 20% faster. If Renderman scaled linearly with clock, a 3.2GHz Xeon would only be 14% faster. Still pretty impressive.

Well, this is a minor issue and we're only talking about a single benchmark result, but we should be accurate. The G5 was only 12% faster in the RenderMan benchmark, not 20%. Therefore, a 3.2 GHz Xeon would scale to being just slightly faster than the 2 GHz G5. However, I don't believe that Xeons are available at 3.2 GHz (they still max out at 3.06 GHz). The 3.2 GHz parts are Pentium 4's.

Do the math:

G5 @ 2.0 GHz: 216 seconds
Xeon @ 2.8 GHz: 246 seconds

(246 - 216)
--------------- = 0.12
246

or

(1.00 - 0.12) x 246 = 216

Nothing to get real excited about, but we should try to be accurate in our posts.
 
Re: Re: Re: RenderMan Results?

Originally posted by fpnc
Well, this is a minor issue and we're only talking about a single benchmark result, but we should be accurate. The G5 was only 12% faster in the RenderMan benchmark, not 20%.

While it is correct that the G5 takes 12% less time for the task, it is about 14% faster:

G5 speed: N/216 work-units/sec
Xeon speed: N/246 WU/s

(N/216 - N/246) / (N/246) = 246/216 - 1 = 30/216 ≈ 13.9%
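If it helps, the same arithmetic in a few lines of C, using only the 216 s and 246 s figures quoted above, shows both numbers side by side:

#include <stdio.h>

int main(void)
{
    double g5 = 216.0, xeon = 246.0;  /* render times in seconds, from the report */

    /* "Takes X% less time": time difference relative to the Xeon's time. */
    double less_time = (xeon - g5) / xeon;                     /* ~0.122 */

    /* "Is X% faster": speed is 1/time, so compare relative to the Xeon's speed. */
    double faster = (1.0 / g5 - 1.0 / xeon) / (1.0 / xeon);    /* = xeon/g5 - 1 ~ 0.139 */

    printf("%.1f%% less time, %.1f%% faster\n",
           100.0 * less_time, 100.0 * faster);                 /* 12.2% and 13.9% */
    return 0;
}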

Originally posted by fpnc
Nothing to get real excited about, but we should try to be accurate in our posts.

:D
 
Re: Re: Re: Re: Re: RenderMan Results?

Originally posted by AndreHAL
Wow! :eek:
Why not just do one simple division:
246/216 = 1.139, i.e. 13.9%

Very impressive though!

- because I wanted to make clear that you have to divide by 216 and not by 246... You have to divide by the Xeon's speed, but since speed is the inverse of the number of seconds, you end up with 30/216 instead of 30/246...

And now, after this extremely important aside, back on topic... ;)
 
Originally posted by bluedalmatian
excuse me for being ignorant but what exactly is a blade server?

Typical servers are measured in rack units (1U = 1.75"). A thin server is now 1U (such as the Xserve), which means that you can fit up to 48 of them in a 48U rack--in practice far fewer, because of cabling, room for routers/switches, KVM switches, and general inconvenience. Blade servers work out to around 0.5U each (thus up to 96 blades, or 192 processors with dual-CPU blades, per rack) via vertical mounting, using some notebook components, removing some unnecessary components (like a graphics card and PCI expansion), and sharing systems such as keyboard/video/mouse, Ethernet and Fibre Channel switches, redundant power supplies, etc. I should add that the AGP graphics card is not needed for any "scale out" application such as a renderfarm.
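The density arithmetic above, spelled out as a quick C sketch (the 48U rack height and dual-CPU blades are taken from this post; real racks lose slots to switches, power and cabling):

#include <stdio.h>

int main(void)
{
    const double rack_u         = 48.0;  /* rack height used in the post above        */
    const double u_per_1u       = 1.0;   /* 1U pizza-box server, e.g. an Xserve       */
    const double u_per_blade    = 0.5;   /* effective height per blade (14 per 7U)    */
    const int    cpus_per_blade = 2;     /* dual-CPU blades, as in IBM's BladeCenter  */

    printf("1U servers per rack: %.0f\n", rack_u / u_per_1u);                        /* 48  */
    printf("Blades per rack:     %.0f\n", rack_u / u_per_blade);                     /* 96  */
    printf("Blade CPUs per rack: %.0f\n", cpus_per_blade * (rack_u / u_per_blade));  /* 192 */
    return 0;
}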

Right now the most impressive blade on the market is the IBM BladeCenter, which uses dual P4 Xeon blades, 14 of them in a 7U enclosure. RLX (a spinoff founded by ex-Compaq developers) was the first, followed by Compaq and HP, and recently IBM, Dell and Sun (that last one is a very nice blade system and is also worth a look). I may be biased because we use the IBM BladeCenter at work. IBM and Intel have a general goal of standardizing the components of blade servers across vendors, which is another reason I'm partial to them.

If you look at the BladeCenter design, you'll see that its ventilation system may have had some influence on the PowerMac G5.

IBM has demoed a BladeCenter using PowerPC 970 (G5) blades running Linux and AIX (no Mac OS X, I assume); those are due later this year. Note that since the BladeCenter is modular, you should be able to mix and match the new blades within an existing BladeCenter.

It's not likely that Apple will have an offering here, since a blade gives up certain things like a graphics card, which Mac OS X relies on heavily, and the 1U server already has dual use as a SOHO server and is the server of choice for SMEs.

By the way, to my knowledge the Intel servers that Pixar uses are probably leased from Rackspace, which is why you see all those Dells in photos of Pixar's renderfarm. So Pixar can switch when the lease expires, or if it has some out clause. I doubt they are using Dell blades, as those are rather new and have a generally inferior design (P3, slow bus, RAM limitations) which, while fine for simple web applications, may have performance implications for renderfarms; I'm more inclined to believe the report that they use RackSaver servers, and to explain away the Dells in the photographs as the hardware Rackspace is known to be partial to.

However, I don't see Pixar switching to Mac OS X for their renderfarm, since the price/performance won't be able to keep up with Linux on the 970 (G5), Linux on x86, Linux on Itanium, or Linux on whatever is best at the time. And yes, if this quote from Pixar's president of technology is to be believed, Renderman is compiled with ICC, which auto-vectorizes for MMX/SSE. Apparently, IBM will be submitting something along those lines for acceptance into GCC for the 970/G5. I doubt that Renderman on the G5 has been similarly optimized yet. Certainly Pixar will benefit if either the GCC project or Apple makes such patches available.
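To make "auto-vectorization" concrete: the compiler spots simple loops like the C sketch below and emits SSE (or, with the hoped-for 970 work in GCC, AltiVec) instructions that process several floats per instruction, while a plain scalar compile handles one float at a time. This is a generic illustration, not RenderMan code:

/* The classic auto-vectorization candidate: independent per-element work
 * over contiguous arrays.  A vectorizing compiler can turn this into
 * 4-floats-at-a-time SSE or AltiVec code; the source does not change. */
void scale_and_add(float *dst, const float *a, const float *b, float k, int n)
{
    for (int i = 0; i < n; ++i)
        dst[i] = a[i] * k + b[i];
}

Same source, very different machine code depending on the compiler, which is why the ICC-vs-GCC question matters for a render benchmark.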

Honestly ask yourself what a Mac gives you, and you'll see that when you take away the GUI and ease of use (neither of which you need when you are going to remotely provision your application across 1000 CPUs), you can get the same thing on Linux without any licensing. It doesn't make good business sense to switch to OS X.

Note, I was talking about the renderfarm, not their desktops. It is a foregone conclusion that Pixar is migrating to Mac OS X G5 desktop workstations (or is this conference just a really big typo?). Here, Mac OS X really solves a problem. That is simply good business sense.

I think it is generally understood that the folks at Pixar run things so smoothly that Jobs hardly ever shows up at his office there. I don't think he'd be demanding that they use Macs. After all, we are talking about a guy who used an IBM ThinkPad with NeXTSTEP/86 for years after becoming iCEO at Apple.
 
If I remember right, Pixar rents the server farm they now have from another company that sells them (not makes them). They did this because of the time crunch they had with Nemo. Well, it's nice to hear that Renderman is coming to X. It's also interesting (I'm supposing here) that the G5 isn't running anything but 10.2.7 (the renderfarm at Pixar is using ICC? This might be very interesting!) and that Renderman for X is still beta. If that's true, then with Panther, some good AltiVec support in the new GCC, and some finalization of the software, this is going to be a nice year for Apple!
 
Re: Renderman

Originally posted by jaedreth
They did have to make minor alterations to it so it would recognize the hardware properly. I read an in-depth article on this elsewhere. Forgot where. But basically, just enough changes to make it run right. No AltiVec or anything...

Jaedreth
Probably the Ars Technica interview with one of the PPC 970 engineers. They had to lie to the compiler to get decent performance since gcc just isn't built to handle the PPC 970's architecture.
 
Re: Re: would have been nice

Originally posted by Mr. Anderson
I'm guessing that the next one after The Incredibles will have more of a G5 and Apple influence. :D

D
Now that would be wonderful. Pixar's first post-Disney release (yeah!) made with Macs (yeah again!).
 
And that'll be...Toy Story 3? They've been itching to do that, but their Disney distribution deal doesn't make that a clever film for Pixar to make right now. But it'll be a sure-fire success, and it'll be made on the Mac.
 
Apologies on the Domain of Depth thing, I was getting it confused with a Maya thing, and when I went back to check I realized my mistake. Ahh how cool the lighting effects in Shake are.

Where are these photos of the Dell renderfarm that Pixar is using?

Last time I saw (and admittedly it was a while ago) Pixar was using an all Sun renderfarm. I'll have to see if I can find that picture I have of it and post it.

If they have changed to an all-Intel renderfarm, then all that goes to show is that it really isn't that hard to do a platform/renderfarm shift.
 
The article seems to have some facts wrong. Pixar has several different platforms in use, and when they "switched" to Intel, they switched one application to linsux on Intel. They still use Sun systems. Even after the switch to OS X, they will still have different platforms. If you look at dual-CPU configurations, the USIII is faster in fp than the G5 at 2GHz.
Base fp according to Apple for a dual 2GHz: 15.7
Sun Blade 2000 with one 1.2GHz USIII: 11.1
Sun Blade 2000 with two 1.2GHz USIII: 19.7
Sun Blade 2000 with two 1.05GHz USIII: 14.6

Pixar is going to use what is economically feasible, and not all of their equipment gets updated at the same time. They still use Sun equipment as well.
 
Re: Re: Pixar & Apple

Originally posted by agreenster
SGI has been dead for a while. It should be an easy take-over. Most studios are using Linux/x86 boxes for their workstations. Hopefully the tide will shift to the G5...starting with Pixar
Actually, SGI's biggest customer is the Department of Defense (the real DOD) ;) Anyway, I don't think they're in danger of going out of business any time soon. They fill a niche market that nobody else seems to do very well. Ever try to hook 16 graphics cards up to a single computer and make a huge video wall with each screen displaying different 3d modeled real-time data? You can say they are dead, antiquated dinosaurs of computers, or whatever. But let me remind you of another company a lot of pundits like to say is dead. It's a company run by Steve Jobs that starts with an A and ends with an e. ;) So before you go spouting your mouth off about companies that are dead, research the facts.
 
Originally posted by Mr. Anderson
Oh please let that not be true - that movie was total crap.

The FX were ok, but the story was crap, along with the acting.....blah

D

*OT*

Kinda killed the intent of the original Heinlein novel, in many ways. It's a good read, in a freaky kind of way.

I mean heck, in the book they had mech suits with nuclear bomb launchers and stuff, it wasn't like modern day GIs going against bugs one-on-one and getting totally mauled. In the book they were pretty much unstoppable, and that was kind of the point.
 
Re: Re: Renderman

Originally posted by nagromme

Any thorough, reputable, non-biased refutation of that claim--showing that Apple did NOT optimize gcc for their SPEC tests--would be a link I'd like to see, if anyone has a URL. (Otherwise, probably best to ignore this post rather than dig into the SPEC nonsense again!)

The two main points I've seen are: they turned off a feature called software prefetch (no idea what it does!), but they claimed it will be off for production as well, so it's valid for benchmarking.

The other is that they used a special, single-threaded version of malloc (the function used to allocate memory); according to Apple they haven't decided whether or not this will be used in production.
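To picture why that matters: a thread-safe malloc pays for locking (or something equivalent) on every call, while a single-threaded build simply skips it. A toy C sketch of the difference (hypothetical wrapper names, not Apple's actual allocator):

#include <pthread.h>
#include <stdlib.h>

static pthread_mutex_t heap_lock = PTHREAD_MUTEX_INITIALIZER;

/* Thread-safe flavor: every allocation pays a lock/unlock,
 * even if the program only ever runs one thread. */
void *malloc_mt(size_t size)
{
    pthread_mutex_lock(&heap_lock);
    void *p = malloc(size);          /* stand-in for the real heap work */
    pthread_mutex_unlock(&heap_lock);
    return p;
}

/* Single-threaded flavor: same heap work, no locking overhead.
 * Fine for a single-threaded SPEC run, unsafe for threaded code. */
void *malloc_st(size_t size)
{
    return malloc(size);
}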

If anything, this all just shows the folly in benchmarking machines over two months before release. If Apple were "cheating", then how were their 2.0GHz benchmarks lower than IBM's 1.8GHz benchmarks?

Mike.
 
Re: Re: Renderman

Originally posted by nagromme
Regarding how the Wintel crowd see the Pixar change: many will indeed claim that Pixar is choosing a platform that will harm their business, just because their CEO cares more about his other company than about Pixar. Absurd, but I agree, it will be said.

Another point: Jobs has been CEO of both companies for a while now and they haven't made the switch; indeed, they made a highly publicized switch to Intel from Sun. So if they make the switch now, in light of the arrival of the G5s and OS X finally reaching maturity (in my opinion), it sends a more powerful signal.

Mike.
 
Re: Re: Re: Renderman

Originally posted by whooleytoo
The two main points I've seen are: they turned off a feature called software prefetch (no idea what it does!), but they claimed it will be off for production as well, so it's valid for benchmarking.

Read the interview with the PPC 970's chief designer. He talks about compiler problems.
 
Re: Re: Re: Pixar & Apple

Originally posted by illumin8
Actually, SGI's biggest customer is the Department of Defense (the real DOD) ;) Anyway, I don't think they're in danger of going out of business any time soon. They fill a niche market that nobody else seems to do very well. Ever try to hook 16 graphics cards up to a single computer and make a huge video wall with each screen displaying different 3d modeled real-time data? You can say they are dead, antiquated dinosaurs of computers, or whatever. But let me remind you of another company a lot of pundits like to say is dead. It's a company run by Steve Jobs that starts with an A and ends with an e. ;) So before you go spouting your mouth off about companies that are dead, research the facts.

SGI is also moving towards the Itanic, so they will just be another peecee manufacturer. It won't be hard for them to find a market; they should actually hurt the other peecee companies. If you want Itanic, you can go with a company that can build large systems and can do clustering, or you can go with someone that only does cluster setups and has little real-world experience.

If you don't want Itanic, SGI will be out.

SGI is also much smaller than it once was. Their current CPU is really showing its age and has not been updated for some time now, just made faster in terms of MHz.
 