I have to agree with the guys talking about the blade servers etc. Steve Jobs said last year that Apple is sticking with PPC.

This is Pixar, and as they said, these are humongous servers with many, many chips, and Apple doesn't even offer anything in this area.

Especially blade servers.

Yes, Sun systems are great, especially for databases. BUT, the big thing is cost. It's one reason that when we do Oracle systems now, unless they are just huge, we use Linux on Intel. Linux on Intel is very, very cost-effective in the server environment.

sheesh.

*waits for more idiotic marklar and mosx on intel remarks*
 
Originally posted by MacCoaster
Kernel design has *NOTHING* to do with which processor is right for which kernel. Kernel design is generally a philosophy of an OS's design. In fact, Mac OS X's microkernel--Mach--was developed on x86. It is used in GNU/Hurd as well. It's just a design philosophy. A microkernel is best for Mac OS X because of the way it is implemented, but it might not be the best for Linux. Different strokes for different people/needs.

Well, most OS vendors have shied away from the microkernel design philosophy due to performance latencies:

The Microkernel Experiment is Going On

skip down to 'Microkernel Vs Macro Kernel' in the next link:

MorphOS in Detail

Of course this concept won't go away, but until systems are optimized to take advantage of this approach, we're still reliant on the best of both worlds!
 
PR

OK neat! :mad:

But if this is true, then why don't Pixar and Intel write a press release about it? I mean, it's rather big news if it is true.

That's my two cents anyway.
____________________________________
If it is true... then I will switch... to SUN.
 
another take on this

OK, I took some time to think more about this. If this is true, then Steve has to answer to Apple's shareholders. And that won't be fun for him.

So, perhaps this is a way for some guy/guys to make the Apple stock fall. I mean, it's obvious that news like this will hurt Apple's stock, and it's no secret that Apple has a lot of products in their product pipeline for 2003.

So, I think this story is made by some idiot who is trying to make some big $$.

This isn't the first time I have seen this type of thing happen. There is this guy who writes for a big newspaper in the US who wrote a lot of negative things about Apple. He was one of the first to write that Apple would probably change to Intel (or AMD) after Q1 last year. That guy owns a lot of Intel stock, so he tried to get some $$. I think this is really lame. If this is true, I hope those involved will be sentenced to 20 years of boredom.
 
This doesn't surprise me at all. I see a future happening with Intel and Apple because Intel owes Apple a lot of respect. If it wasn't for Apple, USB wouldn't be as big today, in my opinion.

iJon
 
Apple is irrelevant, Pixar is relevant

Originally posted by cevin
If this is true then Steve has to answer to Apples shareholders.

Oh, it's true. Do a web search for "pixar linux", and you'll see lots of hits describing Pixar porting their software to Linux, buying IBM P4 workstations,....

Steve has to answer to Pixar's shareholders, not Apple's.

He's probably taken himself completely out of the loop on their hardware purchases - too much risk of conflict of interest. He probably can't do anything more than ensure that Pixar is treated like any other top tier Apple customer - same discounts, same access to non-disclosure info about future products.... Any special treatment, and Apple's shareholders could sue.

One thing you can probably deduce from this news, though, is not to expect any Pentium-toasting IBM 970-based Xserves by mid-autumn (N.H.). If they were that close, and they were that powerful, Pixar might have waited.
 
Re: Apple is irrelevant, Pixar is relevant

Originally posted by AidenShaw


One thing you can probably deduce from this news, though, is not to expect any Pentium-toasting IBM 970-based Xserves by mid-autumn (N.H.). If they were that close, and they were that powerful, Pixar might have waited.
I wouldn't speculate like that, because even if there are 970s in the works, would Pixar even know about them? And if I'm not mistaken, there are no Xserves that could meet the specs of what Pixar purchased (1024 processors in 8 servers).
 
Blade Servers

I found Fujitsu can put 20 blades in a 3U case, with 2 P3-based Xeons per blade! That's crazy. That would be 40 processors per chassis, times 14 chassis per rack, or 560 processors per rack total!! Xserve can't come close: 84 processors. Oh wow. I love Apple, but unless Pixar wants to buy some new tracts of land for more buildings, Apple's not the right company for them at the moment.:( OK, they wouldn't need a new building, but it's 2 racks versus 13 racks, and racks are like $500 apiece. OK, that's not a whole lot (it's the cost of 2 single-processor Xserves), but still, it's cooler to point at two filing-cabinet-like boxes and say that they rendered "Finding Nemo" than to point at a room and say the same thing.
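The rack math above can be sketched as a quick back-of-the-envelope script (the 42U rack height is my assumption; chassis counts are rounded down and rack counts rounded up):

```python
import math

# Back-of-the-envelope rack density, using the figures from the post.
RACK_UNITS = 42  # assumed standard 42U rack

# Fujitsu blade chassis: 3U, 20 blades, 2 Xeons per blade
blade_chassis_u = 3
cpus_per_chassis = 20 * 2                           # 40 CPUs per 3U chassis
chassis_per_rack = RACK_UNITS // blade_chassis_u    # 14 chassis per rack
blade_cpus_per_rack = cpus_per_chassis * chassis_per_rack  # 560

# Xserve: 1U, dual processor -- 42 units, 84 CPUs per rack
xserve_cpus_per_rack = RACK_UNITS * 2               # 84

# Racks needed for the reported 1024 processors
blade_racks = math.ceil(1024 / blade_cpus_per_rack)    # 2
xserve_racks = math.ceil(1024 / xserve_cpus_per_rack)  # 13

print(blade_cpus_per_rack, xserve_cpus_per_rack, blade_racks, xserve_racks)
# -> 560 84 2 13
```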
 
Re: Apple is irrelevant, Pixar is relevant

Originally posted by AidenShaw


Oh, it's true. Do a web search for "pixar linux", and you'll see lots of hits describing Pixar porting their software to Linux, buying IBM P4 workstations,....

oh, damn.

Any special treatment, and Apple's shareholders could sue.

that's true.

One thing you can probably deduce from this news, though, is not to expect any Pentium-toasting IBM 970-based Xserves by mid-autumn (N.H.). If they were that close, and they were that powerful, Pixar might have waited.

I can't see why that would be so. As you wrote,
Any special treatment, and Apple's shareholders could sue.
so that could be the case if it were the other way around. I mean, if Apple sold Xserves to Pixar and then showed Pixar special favors, that would be a conflict of interest instead.

So, due to the NDA, Apple could not give Pixar any special favors, and because of that Pixar had to go the way they went.

And due to the NDA, we don't know if there will be Pentium-toasting IBM 970-based Xserves.

So it's a bit of a Catch-22.
 
just the *same* treatment

Originally posted by cevin
So, due to the NDA, Apple could not give Pixar any special favors, and because of that Pixar had to go the way they went.

I am assuming that Apple in fact does give some large customers NDA presentations.

For example, note: http://maccentral.macworld.com/news/0205/25.henrico.php

Henrico County was the centerpiece of Apple's launch of the new iBook when the remodeled consumer laptop was first unveiled a year ago. On the same day Apple introduced the new iBook, it announced Henrico County had signed up to buy more than 20,000 of them.

It seems pretty obvious that the Henrico school system must have had some NDA information if they placed an order for 20,000 before the new iBook was announced!

So, if Apple is discussing the 970 with some companies, then discussing it at the same level with Pixar would be OK. It's not a Catch-22.
 
Originally posted by AidenShaw


But that same 64-bit CPU (the one with enough bandwidth) will be faster on most programs if they're compiled in 32-bit mode than if they are in 64-bit mode.

64-bit pointers use more memory, more bandwidth, and more cache. If you don't need the extra address space, you are better off with 32-bit.

And, if Sun hardware is so fast, why is Pixar moving from 64-bit Sun boxes to 32-bit Pentium 4 systems?

Following this logic, we should all just go back to 8-bit. Give me a break!
 
Originally posted by sedarby
Following this logic, we should all just go back to 8-bit. Give me a break!

:) LOL

That's a bit extreme, and not the point.

The point is that most programs fit in the 2GB to 4GB of virtual address space that a 32-bit program gives you. Few would fit in a 16-bit or 8-bit virtual address space.

For those programs that are happy in 32-bits, they won't magically run faster if they're recompiled for 64-bits. They might run marginally slower, especially if they use lots of pointers. If you need more than 2GB of memory per process, 64-bits is wonderful and necessary. A few applications might benefit from native 64-bit integers even though they don't need > 2GB.
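A rough way to see the pointer-overhead point (a sketch, not a benchmark; struct.calcsize('P') simply reports the pointer width of whatever interpreter runs it, so the "doubling" only shows up on a 64-bit build):

```python
import struct

# Native pointer width: 4 bytes on a 32-bit build,
# 8 bytes on a 64-bit build.
ptr = struct.calcsize('P')

# A hypothetical linked-list node holding one 32-bit int plus one
# next-pointer (struct padding ignored for simplicity): the pointer
# alone doubles when you go 32 -> 64 bit, so pointer-heavy data
# structures consume more memory, bandwidth, and cache.
node_32bit = 4 + 4   # int + 32-bit pointer =  8 bytes
node_64bit = 4 + 8   # int + 64-bit pointer = 12 bytes

print(ptr, node_32bit, node_64bit)
```

The int payload is unchanged; only the pointer grows, which is exactly why programs that don't need the extra address space can end up marginally slower when recompiled for 64-bit.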

There are some really fast 64-bit processors (Alpha, PA-RISC, POWER4, Itanium, ...), but they are fast processors that happen to be 64-bit - they are not fast just because they are 64-bit. The P4 and G4 have internal datapaths up to 256-bits wide today - 64-bit addressing isn't needed for wider paths for data.

There's a "bit myth" developing that people think that the 970 will suddenly vault Apple into some new dimension of 64-bits. Ha.

If the 970 is used by Apple, it'll be a big step back to performance parity with the Pentium 4. It'll also be used as a 32-bit chip running 32-bit applications - we're not going to see OSX rewritten for 64-bits this summer, and we're not going to see all the apps rewritten for 64-bits.
 
Originally posted by AidenShaw


:) LOL

That's a bit extreme, and not the point.

The point is that most programs fit in the 2GB to 4GB of virtual address space that a 32-bit program gives you. Few would fit in a 16-bit or 8-bit virtual address space.

For those programs that are happy in 32-bits, they won't magically run faster if they're recompiled for 64-bits. They might run marginally slower, especially if they use lots of pointers. If you need more than 2GB of memory per process, 64-bits is wonderful and necessary. A few applications might benefit from native 64-bit integers even though they don't need > 2GB.

There are some really fast 64-bit processors (Alpha, PA-RISC, POWER4, Itanium, ...), but they are fast processors that happen to be 64-bit - they are not fast just because they are 64-bit. The P4 and G4 have internal datapaths up to 256-bits wide today - 64-bit addressing isn't needed for wider paths for data.

There's a "bit myth" developing that people think that the 970 will suddenly vault Apple into some new dimension of 64-bits. Ha.

If the 970 is used by Apple, it'll be a big step back to performance parity with the Pentium 4. It'll also be used as a 32-bit chip running 32-bit applications - we're not going to see OSX rewritten for 64-bits this summer, and we're not going to see all the apps rewritten for 64-bits.

From an application program's point of view, 64-bit CPUs won't make a difference in terms of speed. I think the confusion basically starts with the fact that a 64-bit CPU (even running at the same clock speed) won't be faster than a 32-bit CPU, and that is basically true from a technical mindset... but from a purely data (not program) point of view, addressing more data per operation will increase overall performance while not actually being any faster CPU-speed-wise! That is why 64-bit servers are so popular; their whole function is to move data from point A to point B swiftly. Databases come to mind in this scenario! If this weren't the case, there would be no incentive to even market/develop 64-bit CPU technology to address the shortcomings of 32-bit CPU technology!!

;)
 
Just try to find 32-bit UNIX any more....

Originally posted by AmigaMac
addressing more data per operation will increase overall performance while not actually being any faster CPU-speed-wise!
...
That is why 64bit servers are so popular, their whole function is to move data from point A to point B swiftly. Databases come to mind when speaking of this scenario!

But 64-bit CPUs do not address more data per operation, with the sole exception of having a native 64-bit integer type that few (if any) 32-bit CPUs have!

In a 64-bit CPU (assuming C/C++ with the prevalent LP64 programming model), a short is still 16-bits, an int is 32-bits, floats and doubles are 32 and 64 bits - same as it ever was. With the single exception of 64-bit integers, nothing has changed.
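That LP64 claim is easy to spot-check with Python's struct module (a sketch; these calls report the native C type widths of the platform running them, which on common LP64/LLP64 systems match the sizes quoted above):

```python
import struct

# Native C type widths, matching the LP64 discussion:
sizes = {
    'short':     struct.calcsize('h'),  # 2 bytes, 32- and 64-bit alike
    'int':       struct.calcsize('i'),  # 4 bytes, 32- and 64-bit alike
    'float':     struct.calcsize('f'),  # 4 bytes
    'double':    struct.calcsize('d'),  # 8 bytes
    'long long': struct.calcsize('q'),  # 8 bytes -- the native 64-bit
                                        # integer mentioned above
}
# Under LP64, only long and pointers widen to 64 bits;
# every type listed here stays put.
print(sizes)
```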

Why is 64-bit good for databases? One of the main reasons is that one can cache multi-GB of the database in main memory on a 64-bit machine. Memory is much faster than disk - especially for the indices and other structures in a database.

Why are 64-bit UNIX servers so popular? Could it be because hardly anyone is making 32-bit UNIX anymore?

IBM's lineup is all 64-bit, except for two entry-level single-CPU machines using a PPC 604e. Sun is all 64-bit, except for the single Intel-powered model. HP sells 64-bit HP-UX on PA-RISC and Itanium; 64-bit OpenVMS (not UNIX), Tru64 UNIX, and Linux on Alpha; and 64-bit Windows and 64-bit Linux on Itanium.

All the big, fast iron is 64-bit, and has been for years.
 
Re: Just try to find 32-bit UNIX any more....

Originally posted by AidenShaw


But 64-bit CPUs do not address more data per operation, with the sole exception of having a native 64-bit integer type that few (if any) 32-bit CPUs have!

In a 64-bit CPU (assuming C/C++ with the prevalent LP64 programming model), a short is still 16-bits, an int is 32-bits, floats and doubles are 32 and 64 bits - same as it ever was. With the single exception of 64-bit integers, nothing has changed.

Why is 64-bit good for databases? One of the main reasons is that one can cache multi-GB of the database in main memory on a 64-bit machine. Memory is much faster than disk - especially for the indices and other structures in a database.

Why are 64-bit UNIX servers so popular? Could it be because hardly anyone is making 32-bit UNIX anymore?

All the big, fast iron is 64-bit, and has been for years.

You seem to be contradicting yourself... being able to address more memory is a performance advantage, especially if the architecture supports it! Integers are one of the important ingredients of computation! The less read-write-swap you do, the better!

And why would any company waste their time with 64-bit if there weren't any advantage?! Seems pretty moot to me!

The last sentence sums it up quite well, which makes my point clear!

:rolleyes:
 
Re: Re: Just try to find 32-bit UNIX any more....

Originally posted by AmigaMac
You seem to be conflicting with yourself... being able to address more memory is a performance advantage,

You missed the "per operation" statement of my reply - it was directed at an earlier post.

Big databases need 64-bit addressing to keep huge caches in memory, that's a primary reason why big iron is 64-bit.

If someone has a desktop machine with 512MB to 2GB of RAM, it's very unlikely that there will be any advantage from 64-bits. If one has 2GB and has run out of memory room, then 64-bits may help (or a 32-bit system with 32GB or 64GB of RAM may be enough).

Remember what this story is all about - Pixar moving from a 64-bit system to a 32-bit system for speed.
 
What does this say/not say about the new Xserve RAID?

As much as this steams me, I have to admit that Pixar is doing the right thing.
(Caution: some b!+ching and complaining...)

It makes sense that Pixar went with Intel, even if I dislike the idea of them going to anyone other than Apple. Time is definitely money for film deadlines. So the idea that they would go with a slower machine (Apple's Xserve) does not add up at all.

The almighty buck is what matters here, so, if Linux is on an Intel server, fine. Just hope Apple gets going with faster processors (SOOON!!!!) so we can stop feeling like a snail...

I love Apple's products. I hate the waiting around. If Intel gives a better performance with their servers, then so be it. I will still be loyal to Apple's powerbooks, and desktops, but speed counts.

If you had to bring a film production in on or under budget, and that budget was $120 million for a calendar year of work, I think you'd want every second to count. I don't know how much Finding Nemo cost, but if you can't control the timetables because you have slow machines, then shoot, what's the point of having the slower equipment? Loyalty to Apple is drying up amongst a lot of Hollywood render farms as Apple fails to take care of this situation.

Having a lot of rendering software ported over to OSX is just as important as having darn fast server product. Hopefully, this too will be addressed.

Filmmakers need an inexpensive solution for rendering both non-linear files (FCP) and 3D animation and composite work. Apple's got it right giving us iLife, now we need an "iWork" workhorse solution for all our rendering needs.


:mad:
 
Steve Jobs

On a kind of related note:

I recently listened to an audio book (listened to it on my iPod!) named "The Second Coming of Steve Jobs."

This book does a great job of describing Jobs's life from the NeXT debacle to his resurgence at Apple, and ends in the year 2000. It also goes in depth into Pixar, describing its beginnings, the people, when Jobs bought the group from George Lucas, the first Disney deal, etc.

One very interesting thing you discover from the book is how little Steve Jobs has had to do with the day-to-day running of Pixar, even after the success with Disney and taking the company public. Saying Steve Jobs is in charge of Pixar is like saying that the Queen is in charge of England. At Apple, Jobs is without question large and in charge. At Pixar, however, Jobs has mostly been in the role of figurehead and financier. He tried over and over through the years to implement his policies and philosophies at Pixar, to no avail. Ed Catmull and John Lasseter run Pixar.

So, in my opinion, making assumptions that the things Pixar does are somehow foretelling of future directions at Apple, or vice versa, is wrong. I think they are two VERY separate companies.
 
Off my original topic, but worthy of note...

Originally posted by dricci
However, don't forget that LoopRumors had a fake photo. While their story may have been (somewhat) true, the photo was still of a big Apple logo (that was obviously photoshopped on).

Um, you might want to check out the post from Sunday, Feb. 9th at their site.

"We contacted the source who since sent in more photos to prove the original image was not altered. I've resized the images below so you can pull them onto your desktop and open them for a closer view, and for those who are still not convinced, there's a soundclip..."

http://www.looprumors.com/
 
Further addendums to microkernel comments

Originally posted by AmigaMac
Furthermore, context switching on the x86 architecture is slow (compared to PowerPC), and x86 is purely the wrong platform for OSes that use a microkernel, which OS X does!
And to further prove you wrong, Windows NT and its derived OSes' kernels are in fact microkernels [source]. Windows XP is plenty responsive on a 1 GHz Athlon. Mac OS X is okay on a single 1 GHz G4, but that is mostly due to Quartz, not the microkernel design decision.

Again, microkernels have NOTHING to do with which processors they are best run on; the latency they create does exist on all processors.
 