Pentium 4 vs Athlon 64 vs G5 Benchmarked

Even if these benchmarks aren't completely fair, the Athlon 64 is still faster than the G5.

Until people realize that speed isn't that important, and that overall efficiency plays a much bigger part in the computing experience, they won't truly be able to enjoy their Macintosh.

scem0
 
Originally posted by mac15
Ah well, it was bound to happen really. Wait till the G5 gets Hyperthreading and jumps up a few hundred megahertz, then things will get interesting once again :D

I really doubt you'll ever see hyperthreading in PPC processors... since Apple likes to make $$$ off of dual-processor systems, and it would boost the performance of an SP system too much.

And for all those people complaining about 128 MB vs 256 MB of graphics memory: it's pretty negligible as far as benchmark scores go.
 
blah blah blah

blah blah blah amd intel blah blah blah g5 blah blah..

DO WE NOT LIKE MACS OR DO WE LIKE PCS? I know what I like. I'm not a power user, but I know I'd love a G5 over a PC any day, no matter what speed.
 
If you want to see the best and most even-handed test, check out MacAddict's Jan. '04 edition.
It pitted a 2.0GHz G5 with 1GB of RAM against the Big 5's 3.2GHz P4 machines with 1GB of RAM.

They ran all the same tests on both Mac and PC, and they ran both Mac and PC benchmark software on both. The Mac benchmark vendor even commented that the suite used in the test was optimized for the G4, not the G5. The same was true on the gaming side. MacAddict also did real-world testing, something their sister publication didn't do in their own tests: the sister publication timed how long it takes to run all kinds of filters and renders on a picture in Photoshop, while MacAddict timed how long it takes to import, export, and render a 25-50MB file (something almost everyone actually does with Photoshop).

In the Mathematica 5.0 benchmark:

The P4 won, but Mathematica's makers said the software was not yet optimized for the G5, while it was optimized for the P4.

In the Photoshop 7.0.1 application benchmark:
The P4 won by 2 minutes.

In the Photoshop 7.0.1 50MB benchmark:
The G5 wins by 8 seconds.

In the Photoshop 7.0.1 25MB benchmark:
The G5 wins by 5 seconds.

In the InDesign 2.0 export-complex-PDF benchmark:
The G5 loses by 4 seconds. InDesign is not currently optimized for the G5.

QuickTime 6.3 encode DV to MOV benchmark:
G5 wins by 4 minutes.

Bibble/MacBibble 3.1a convert RAW to TIFF benchmark:
G5 wins by two and a half minutes.

Compressor vs. ProCoder 1.5 high-speed benchmark:
G5 loses by less than a minute.

Compressor vs. ProCoder 1.5 high-quality benchmark:
G5 wins by 8 minutes.

Unreal Tournament 2003 benchmark:
Unreal won't be optimized for the G5 until the 2004 edition is released.
P4 kills the G5 by 210 fps.

Quake III: Arena benchmark:
Optimized for the G5.
P4 and G5 are in a dead heat at 400 fps each.

Jedi Knight II benchmark:
Not optimized yet, but an update is said to be coming soon.
P4 beats the G5 by 50 fps.

MacAddict said most of the current games still do not take advantage of the G5. Once they have been optimized, you should see their performance match the P4's, if not exceed it.

I wonder what the PC mag will say when more apps come out optimized for the G5 next year and we have at least a 2.4GHz G5 by next January?

Also, the AMD FX-51 costs a bundle. I priced one recently with a motherboard, and it was going to cost me basically a grand for the set, and that was before I even bought RAM for it.

Also, when someone spends $3k on a computer, $200 IMO isn't going to make a big difference in what I buy. Now, maybe $500 or $600... well, that's another story.
 
I wouldn't trust anything on an AMD or Mac site in terms of comparisons. I thought the benchmarks on Barefeats were the most honest of them all.

http://barefeats.com/#quick

I'm more impressed with the FX-51 than the Opteron, but the Opteron is the only dual setup available with AMD64. When a motherboard comes out that can run two FX-51 chips, we will be on more even ground than we are now. After all, the FX-51 is AMD's current flagship, not the Opteron, just as the dual 2.0GHz G5 is the Mac flagship.
 
Unfortunately some of the benchmarks mentioned were games, and this is a situation where it's difficult to compare like with like.
The reason I would question a lot of the results: most Mac games so far have been sloppy ports of Windows games, and only recently have I seen decent Mac game ports. A lot of Mac games have a long way to go with regard to optimization for the platform, for example including AltiVec optimizations. The same games that can leverage SSE and SSE2 on x86 could do much better on PPC by leveraging AltiVec properly (I am deliberately not mentioning G5-specific optimizations, because not everyone with a Mac has a G5, while I would imagine the vast majority of Mac users buying a game have an AltiVec-enabled processor).

A wider problem is how we compare a game across two platforms like x86 and PPC. On x86 most games are presently DirectX, which the Mac obviously doesn't support natively in OS X. Then there is the question of driver quality. OpenGL driver quality is improving in leaps and bounds on the Mac platform; only recently I have seen big FPS jumps with the 10.3.2 update, and that came from better drivers, not new hardware.

Unfortunately it's difficult to make comparisons except to establish what we already know: the PC is a better gaming platform. I would imagine that the Mac OpenGL drivers are much better optimized for 3D apps like Lightwave and Maya than for games, and I would not be surprised if they share a lot of their codebase with the FireGL drivers on the Wintel side. The FireGL exhibits similar behavior to what the Mac sees: its game fps do not show off the card's power, but pro 3D apps do.
As for the other apps, the thing to remember is that we are seeing very raw G5 results, pretty much a worst-case scenario in most apps being benched today.
Neither the P4 nor the Opteron can claim this: both take extensive advantage of SSE and SSE2 in existing apps. As I already mentioned, Apple and IBM have promised to work on adding autovectorization to their compilers, and that would make a huge difference to performance.
As for the rest of the architecture: the G5, unlike the Opteron, does not base itself on a prior generation. The Opteron is essentially a tweaked Athlon XP with an on-die memory controller, SSE2, and x86-64; its FPU and integer units are identical to the Athlon XP's. So for 32-bit code, which is all we can compare either the Opteron or the G5 on today, the Opteron is pretty much as optimized as it can get at present: current 32-bit code tends to carry a lot of Athlon optimizations and SSE2/P4 optimizations, which the Opteron takes advantage of automatically. Its performance is supposed to be about 10% faster in 64-bit mode in Linux tests.
Furthermore, GCC 3.3 provides a lot of P4 and Athlon 64 optimizations, such as autovectorization for SSE, SSE2, 3DNow!, etc. It still does NOT provide autovectorization for AltiVec.
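To make that autovectorization point concrete, here is a minimal sketch of my own (not taken from any of the benchmarks above) of the kind of loop in question. The scalar version is what a compiler with x86 autovectorization could turn into SSE code on its own; on PPC, as things stand, you have to hand-write the AltiVec intrinsics yourself. The function names and the alignment assumptions are purely illustrative.

/* Build with -maltivec under GCC (Apple's -faltivec builds may not need the header). */
#include <altivec.h>

/* scale_add: the plain C loop a compiler would be asked to vectorize */
void scale_add(float *dst, const float *a, const float *b, float k, int n)
{
    int i;
    for (i = 0; i < n; i++)
        dst[i] = a[i] * k + b[i];   /* what autovectorization would map to SIMD */
}

/* scale_add_altivec: the hand-vectorized equivalent a PPC programmer writes today;
   assumes 16-byte-aligned buffers and n divisible by 4 */
void scale_add_altivec(float *dst, const float *a, const float *b, float k, int n)
{
    float kf[4] __attribute__((aligned(16))) = { k, k, k, k };
    vector float vk = vec_ld(0, kf);     /* splat k into a vector */
    int i;
    for (i = 0; i < n; i += 4) {
        vector float va = vec_ld(0, a + i);
        vector float vb = vec_ld(0, b + i);
        vec_st(vec_madd(va, vk, vb), 0, dst + i);   /* va*vk + vb, four floats at a time */
    }
}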
As for the G5, it's an extremely radical and forward-looking design. The writer over at Ars Technica noted in his summary of the 970 and POWER4+ processors that the chips would probably be ahead of the compilers needed to get these puppies to flex their muscles. He was probably right.
The G5 has dual integer and dual FPU units (both complex). Currently GCC 3.3 is not able to get both integer and both FPU units running in parallel, which was the design idea behind the chip. When both pairs run in parallel, the chip operates as the design intended, and its theoretical per-GHz performance is vastly superior to the Opteron or Xeon at the same clock. Theoretically GCC 3.3 only lets the G5 run at about 50% efficiency, since it cannot generate code that keeps the dual integer and dual FPU units busy in parallel. Another forward-looking part of the G5 design is the instruction scheduler: the G5 can have far more instructions in flight than the Xeon or Opteron, and it can retire more instructions if the need arises. This is the result of an incredibly advanced scheduler, which is supposed to keep the wide, deep pipelines filled. Currently GCC does not generate 'optimized' code that can do this; I put optimized in quotes because, while GCC can make certain optimizations, it certainly does not generate code that exercises the processor properly. This is exactly in line with what Ars Technica said, and it was borne out when IBM released their XLC and XLF compilers for the G5 along with their 970 blade machines. In many cases these beta compilers saw integer code perform nearly twice as well as GCC 3.3 code, floating-point code perform 270% better, and vector (AltiVec) code run about 70% better than GCC 3.3 code.
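As a small illustration of the scheduling point (again my own sketch, not anything from the benchmarks being discussed): splitting a reduction into two independent dependency chains gives an out-of-order core with two FPUs something it can actually issue in parallel. Whether a given compiler emits code shaped like this on its own is exactly the question being argued here.

/* dot product written as two independent accumulator chains, so each FPU
   can work on one chain per iteration instead of serializing on a single sum */
double dot(const double *a, const double *b, int n)
{
    double s0 = 0.0, s1 = 0.0;
    int i;
    for (i = 0; i + 1 < n; i += 2) {
        s0 += a[i]     * b[i];       /* chain 1 */
        s1 += a[i + 1] * b[i + 1];   /* chain 2 */
    }
    if (i < n)                       /* odd trailing element, if any */
        s0 += a[i] * b[i];
    return s0 + s1;
}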
This was discussed extensively in the Ars forums and over at Slashdot, where the results were shown to be consistent among many testers on many applications. This is a beta compiler, and performance should definitely IMPROVE. Also, this compiler still does not autovectorize for AltiVec; IBM will be adding this feature, especially since they are now using AltiVec in their blade servers.
So put it this way: in a worst-case scenario, the G5 in raw unoptimized form is about as fast, if not slightly faster, in the majority of real-world benchmarks. What's the performance going to be like when optimal compilers like XLF and XLC are used? Just do a Google search on IBM XLC and XLF and you will see what I mean! ;) These puppies annihilate the competition.
 
So when are we going to see a new version of GCC and a final version of XLC that take advantage of all that the G5 offers? I hope we won't have to wait too long, since every day we wait, more applications are released using less-than-ideal compilers. Most games and mainstream professional apps compiled with GCC 3.3 or earlier won't get the benefit of a recompile after they are released.
 
IBM's compiler is in beta right now, so I would say within the next 3 months.

With that compiler, software performance could see as much as a 50% speed boost.
 
Re: Re: blah blah blah

Originally posted by manitoubalck
Then your ignorance will be your undoing

I think you are the ignorant one, because his statement only proves he's loyal to Apple no matter what speed of processor it produces.

Speed doesn't matter to all Mac users.

I myself like speed, but the only thing I even use a PC for now is to play games on. The rest is done on a Mac.

So far my PC hasn't seen any action for the past 6 months so I may either just give it away or junk it.
 
Yes and no. First and foremost, even in its most raw, unoptimized state the G5 is probably as fast as or faster than the Opteron and co. on the x86 side. In terms of compiler support, it is coming. Remember, this time around with the G5, IBM is using the same processor with the same bells and whistles, like AltiVec, in their blade servers, so they will be keen to leverage its power, as is evident from their release of the XLC and XLF betas. What is also very interesting is that they are vigorously supporting the G5 on Apple's behalf, specifically including compiler flags with optimizations for the PPC970, the blades, and the Apple G5 (all the same chip so far!), plus support for older PPC with AltiVec. IBM has also said in their developer network, and Apple has said as well, that both companies are actively working to get GCC's optimization levels much, much higher.
IBM and Apple will add autovectorization to GCC, XLC, and XLF sooner rather than later, IMO. It's in their own interest to do this, since they are now selling products with a 'G5' in them, namely their blade servers.
As for leveraging the advanced CPU scheduler for extreme parallelism, that will happen over time. But again, I would reckon sooner rather than later, partly because I had no idea they would have such advanced, compatible compilers like XLC and XLF available so soon after the G5 launch.
Another thing to note: why can't your apps start getting better optimized now? XLC and XLF are available for download now, free, from IBM's website, and since OS X is essentially another flavour of Unix, why not try recompiling some of your own favourite apps yourself?
As for manufacturers adding G5 optimizations, again I reckon everything points to sooner rather than later. Apple now provides Xcode with the OS for FREE. Xcode is an evolving development platform, which means developers have free access to it. It also means that, unlike a developer using Visual Studio .NET, they don't have to pay huge fees for the latest and greatest version that includes all the newest compiler tunings for the latest processor families, and they get instant access to the latest optimization techniques Apple provides for the G5 in Xcode.
Not that I want to run down GCC; it does add some G5 optimizations. The level of optimization and speed gain you get depends on whether you simply do a recompile with GCC 3.3 or spend a little time doing specific processor tuning with the provided tools and by writing specific code.
Obviously the highest level of GCC optimization will not be on the same level as XLC or XLF. But I mention this here so you know that there is probably much more performance to be squeezed out of current GCC 3.3 'G5 optimized' code!
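If anyone wants to experiment with this themselves, here is a rough sketch of what I mean, with a toy kernel you can time under different flags. The flag spellings (-mcpu=G5/-mtune=G5/-faltivec for Apple's GCC 3.3, -qarch=ppc970/-qtune=ppc970 for IBM's XL compilers) are from memory of the docs of the time; treat them as assumptions and check your compiler's man page before relying on them.

/* tune_demo.c -- a throwaway kernel for comparing processor-tuning flags.
   Example invocations (verify the exact flags against your compiler docs):
     gcc-3.3 -O3 -mcpu=G5 -mtune=G5 -faltivec tune_demo.c -o tune_demo_gcc
     xlc     -O3 -qarch=ppc970 -qtune=ppc970  tune_demo.c -o tune_demo_xlc
   Then time each binary and compare. */
#include <stdio.h>

#define N 1000000

static double buf[N];

int main(void)
{
    double sum = 0.0;
    int i;
    for (i = 0; i < N; i++)        /* fill the buffer with something to chew on */
        buf[i] = (double)i * 0.5;
    for (i = 0; i < N; i++)        /* simple reduction to time */
        sum += buf[i];
    printf("sum = %f\n", sum);
    return 0;
}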
On the other hand, currently optimized code built with GCC 3.3 tends to be faster than the Opteron running x86-optimized code. Check the Barefeats results for that.

With regard to games, I think there could be huge performance gains if developers bother to tune for the G5, but how many Apple users have a G5 just to play games? Will developers tune for this small minority? I think Apple's biggest problem in terms of games boils down to a few things, none of which really have anything to do with the processor or system architecture per se.
1) Most Mac games are sloppy ports of PC games.
2) Most PC games use the DirectX API, which is not supported under OS X; it's a closed API.
3) I believe that Apple has tuned its video drivers (particularly in 10.3.2) for 3D pro apps more than for 3D games. I seem to get much higher performance with this update in 3D apps like Maya than I do in games! In the Windows world a 9800 Pro is great for games and not so great for Maya, while the same card branded FireGL by ATI, which uses drivers tuned specifically for 3D power applications, is poor at games but great at power apps like Maya and Lightwave.

That said, it will be interesting to see what a company like id does with Doom 3 on the Mac, since they have traditionally been one of the few staunch supporters of the latest Mac technology, and a powerhouse like the G5 would thrive on an app like that. Also, it's written in OpenGL, and id (in my opinion) do not traditionally make sloppy Mac ports; they put the effort their games deserve into them.

Regardless, the PC will stay the games leader in performance and variety. This particular battle will never really be won unless Apple takes a more aggressive stance on getting games developed on and for the Mac by working with development houses better.

Anyway, enough ranting! Night all, and in case I don't hear from or see any of you over the next few days: Happy Christmas, I hope it's a great one for you all and Santa brings lots of PowerBooks and Power Macs.
Rgs
i_wolf

P.S. Guys, sorry for the length; it's just not a simple two-line answer. I got flamed once for replying with a big rant like this, but remember it takes more effort and a lot more time to write back a big thing like this than just two lines! :) So go easy!
 
Currently, for the performance difference, the dual 2.0GHz G5 is nothing spectacular. It most certainly isn't worth the price tag, being $3k at Apple.com with stock components and without a monitor. If you're getting this G5 for personal home use and not professional use, then why? Is it stability? You can get a cheaper Mac for personal home usage if that's your fancy. When the G5 optimizations appear, things may change on the performance level, but as it currently stands there's nothing that would draw me in its direction when we're talking minor performance differences between it and the AMD64.

Let's talk about professional work.

I can do the same professional video editing on the PC at the same efficiency level and performance as the G5, except I benefit greatly in one thing: low cost. I also have the option to play MANY PC games. That's not my intent, but it's still an added option, one the Mac can't match.

Where is the professional 3D graphics and animation field on the Mac? Since the Mac has very limited video card compatibility, that leaves us with nothing in this field. Wildcat, Quadro, Fire, and Oxygen, to name a few, are cards that won't work on a Mac, and they are the leaders in this field.

The only thing you're doing when getting a Mac is paying a LOT of money for stability. Since the Mac and the PC can run apps about as efficiently as each other, do you really want to pay that extra $1k just for stability? To tell you the truth, neither OS X nor XP is flawless, but with a PC you have the option of Linux, which is another route to stability for the programs that are compatible with it. Linux crashes just like every other OS, but it is more stable than XP. Why not use Linux and stop comparing OS X with XP?

Final word:
There just isn't anything on the Mac that I simply cannot do on the PC efficiently, fast, AND with the same quality.
 
Oblivion, good words and I tend to agree.

My next upgrade will come when the new BTX standard arrives around April, with PCI Express graphics cards :D
 
My blah blah comment stems from the fact that I, for one, do not need or feel the need to have super-powerful systems for reading e-mail, surfing the web, and playing the odd game. I understand that a fair few people around here use computers for more important things. I am not loyal to Apple; I just happen to be a fan of theirs.
 
The performance of the G5 is only unspectacular if you look at silly Macworld 'reviews' where blatant holes in the benchmarking process are obvious and an honest benchmark does not take place.
With regard to your point about video editing: AFAIK the G5 and FCP are the only combination that can process 9 streams WITH effects on each stream and run them simultaneously without dropping frames!!!!
If you are in the film industry, that's amazing. You cannot get that performance from any single app-and-chip combo in x86 land at present without dropping frames.
As for 3D work, Maya and Lightwave are a couple of products due for newer editions with G5 optimizations.
You are right that you cannot buy a Wildcat, Quadro, etc. for the G5. However, Apple is using the 9800 Pro, which in all honesty and technicality is an identical card to ATI's FireGL pro graphics card. The only things that differentiate the 9800 Pro and the FireGL on the Windows side are price and the 3D-app-tuned drivers; that's it. More than likely, IMO, Apple is using the 9800 name purely to attract people who would be aware of the 9800 and not the FireGL. Regardless, they have a card in the machine that has just as much potential and for all intents and purposes is a FireGL. Apple has recently released newer drivers for its ATI graphics cards which provide huge increases in performance in different 3D apps. Personally, I would much prefer to have an ATI card such as the FireGL in my machine than a Wildcat or Quadro. AFAIK, the FireGL is the only card that supports full OpenGL 2.0 compatibility, and it's in Apple's machine. I certainly would not knock them on that account.
I really believe the performance of the G5 is spectacular. Look at some of the very early optimized apps, like FCP with its 9 streams with full effects; that's incredible. Then look at completely unoptimized apps and see how well the G5 runs legacy code; that's extremely impressive. It runs 30% more efficiently per clock than the G4, according to IBM and Apple. That's impressive. Its Photoshop results are extremely impressive. It's odd, though: if you took the results Barefeats had for Photoshop 7 with the G5 plugin and compared them against the Opteron, the G5 would have won, yet Photoshop CS seems to be slightly slower than 7 on the G5! But it is still extremely fast. There are a lot of other apps out there that scream on the G5 even in their raw, unoptimized state.
As far as professional 3D graphics applications go, there are Maya, Luxology, Lightwave, and RenderMan, to name a few.
"The only thing you're doing when getting a Mac is paying a LOT of money for stability. Since the Mac and the PC can run apps about as efficiently as each other, do you really want to pay that extra $1k just for stability? To tell you the truth, neither OS X nor XP is flawless, but with a PC you have the option of Linux, which is another route to stability for the programs that are compatible with it. Linux crashes just like every other OS, but it is more stable than XP. Why not use Linux and stop comparing OS X with XP?"
Again, I completely disagree. OS X and XP DO NOT run apps as efficiently as each other. You can have many more apps open at once on OS X, running happily together, because of its Unix heritage with protected memory and multithreading, which XP cannot match. If you are a developer like me who often has 20 apps running at once, you appreciate this. Yes, I would pay more money for stability, because if one app crashes I don't want it to take down all the other apps I'm working in. This happens every so often on XP, especially when XP has a lot of intensive apps open at once.
As far as running Linux goes, yes, that is an option on a G5 as well. But there are a number of problems with this. The same 3D pro apps that you are looking for, for video editing, Photoshop, Lightwave, etc., do not run on Linux. Sure, there are quality open-source alternatives, but in some cases, such as some of the professional 3D apps, there really isn't any alternative. Incidentally, the reason Linux sometimes crashes, for me anyway, is X11, which IMO is a bloated collection of hacks and long in the tooth. It still doesn't support true alpha blending, transparency, etc.
Quartz is rock solid on OS X, and it is fully hardware accelerated. You also have the benefit that Apple has an X11 layer that sits on top of it (instead of a full X11), is hardware accelerated by Quartz, and lets you run all your X-based Linux/Unix apps in addition to Microsoft Office, Maya, Photoshop, and Final Cut Pro. This is the ONLY platform that allows you to do this, and to do it easily.

Final word: actually use a G5 Mac and see for yourself what I'm talking about!
 
"AFAIK the G5 and FCP are the only combination that can process 9 streams WITH effects on each stream and run them simultaneously without dropping frames!!!!"

Apple/FCP claims they don't drop frames. Surf around a few FCP boards and you'll see that dropped frames are one of the biggest problems with the app. Besides, both the Mac and the PC lose to SGI Inferno.

9 streams with effects and no dropped frames? Nope

Try Flame or Inferno as a Real Compositor.

"As for 3d work, well MAYA, Lightwave are a few products that are due for newer editions with G5 optimizations."

Lightwave is old news. What portion of the film market does it command? Try running RenderMan using Lightwave... no. It is nothing compared to Maya Unlimited, Houdini, or Softimage XSI. Lightwave is ancient. Why is Maya used as the principal modeler for Pixar's movies and animation, along with in-house tools?

Where is Maya Unlimited for OS X? It hasn't been released and there are no signs of life. In fact, where is Houdini or XSI for OS X? They are both higher-end than Maya.

"Personally, i would much much prefer to have an ATI card in my machine such as the FireGL than a Wildcat or Quadro. AFAIK, the FireGL is the only card that supports full OPENGL 2.0 compatibility"

The FireGL is NOT fully Maya certified. Quite buggy. Try opening a million-poly scene on the FireGL. Maya is a joke on the FireGL and shouldn't be compared to Wildcat cards, which are FULL OpenGL 2. You're comparing apples to oranges. Do a Google search on the performance differences.

"As far as running linux, yes that is an option on a G5 as well.

Do I sense a performance loss? You're also going to convince me that this is stable as well? Does your Linux have 64-bit support... no. I'm sure it will in time, but you can't escape that much performance loss. If you don't like OS X, you're screwed. If I don't like something in Linux, I can change it. Hey, what about IRIX? Is that also an option for the Mac? LOL

"The same 3d pro apps that you are looking for such as for vid editing , photoshop, Lightwave etc... do not run on LInux."

Maya, XSI, Houdini: all on Linux. Want me to name more? Recent articles in several sources and mags talk about CrossOver, Win32 extensions for Linux, running Office, Photoshop, etc. with NO emulation.

As for the G5 in the professional field:

Who owns Pixar? Steve Jobs has a large interest in Pixar. If Apple and OS X are so great then why are XEON processors used in their rendering farms?

Apple just went 64-bit. How long has Sun been 64-bit?

What do Disney, DreamWorks, and ILM all run? Maya Unlimited on Linux on AMD or Intel boxes.

Now a few questions for you, i_wolf:

What films have you gotten credit for with your Mac?
What have you done professionally?
Where is your experience gleaned from?
 
What films have I gotten credit for? None, personally. That does not mean I have no knowledge of the video editing world or that I don't have well-regarded contacts. It would be elitist, snobby, and ignorant to believe that because I do not work for a film house I know nothing about video editing, as I'm sure you would agree. My experience is well founded, however, in my sister, who works for RTE (the national broadcaster for television and radio in Ireland) and is a part-time lecturer at DIT (Dublin Institute of Technology) in the school of film and broadcasting (Aungier Street). She has taught me a lot about video editing and compositing over the years. This has also enabled me to use some of the college equipment under her supervision from time to time, when I was interested in learning about the latest and greatest.
I am also lucky to have a friend who works for MPC in London, who is very knowledgeable about video editing and compositing and a MASSIVE Final Cut Pro fan! As for my experience with graphics packages, it is based not so much in the film industry as in the gaming industry: many years ago I did work experience at Havoc in Dublin, so I am familiar with a lot of the 3D packages there, though a bit rusty.
Presently I'm a software developer working with math and encryption software, learning something new every day.

So think of me as an extremely interested hobbyist, always interested in seeing what's new in the video editing and 3D modelling world!! :)

Before I answer the rest of your post, one thing concerns me that I would like to clear up, Oblivion. It would appear to me, from the questions at the very end of your post, that you were trying to say 'I have published work, I work for a film company, I have more experience.' None of that really establishes the validity of the comments you or I made above; as such it is meaningless. I'm sure you would agree that it's childish to get into an 'I work for x and have worked there x days longer, therefore I know more' argument. You don't need to compare job titles or companies to establish the validity of the data presented, and none of the questions you asked me establish the validity of my comments. Was this your intention, or have I misread your intent? If so, I apologise for misreading your closing comments and look forward to more debate with you... preferably without trading job titles, descriptions, and rank in the world! :)
Have a nice day.
Kind Regards,
i_wolf

P.S. I will reply to the rest of your post later, just wanted to clear up the closing comments.
 
I just checked out some of the reviews of the FireGL X2: essentially the same hardware we find in the G5, although importantly it is the drivers you really pay for here. I would imagine that Apple is using ATI's reference FireGL drivers for its 9800 Pro, especially since, with the recent 10.3.2 update, performance improved greatly in 3D apps and games, but particularly, I found, in 3D apps.
Technically, ATI's cards are nothing to be sniffed at. Most reviews I have seen state that the only thing that seems to be holding them back slightly is drivers that are not as good as the competition's. While this is extremely important, most reviewers also note that ATI has come on in leaps and bounds in performance with each driver revision. If I were doing pixel-shading-intensive rendering, I would much prefer to have an ATI or nVidia card in my machine than a Wildcat; they are simply more versatile. Thanks to the huge programmability of their pixel shading pipelines, I would argue they are more future-proof for the price you pay. In terms of price/performance, the ATI combo is definitely hard to beat. I can well understand Apple's decision to go this route, and I don't see it as any sort of problem that they didn't put an overpriced, under-specced Wildcat VP in their machine. I would expect an ATI or nVidia card to easily match or exceed a Wildcat offering once more mature drivers are available; drivers for ATI are really improving at a very fast rate. Especially when one looks at the ridiculous prices charged by 3Dlabs. Further evidence of this was the high praise that a lot of high-profile companies were lavishing on ATI's products at SIGGRAPH this year. I think arguing that Apple doesn't have any quality 3D graphics offerings for workstations is unfair and clutching at straws, assuming that Apple is using FireGL-quality drivers for 3D apps. Incidentally, I haven't seen any reviews done with the FireGL X2 and the drivers of 19th November 2003, which are meant to be extremely fast.



"Apple/FCP claims they don't drop frames. Surf around a few FCP boards and you'll see that dropped frames are one of the biggest problems with the app. Besides, both the Mac and the PC lose to SGI Inferno.

9 streams with effects and no dropped frames? Nope"

I have seen this done myself! Yes, you can and do get 9 streams with effects on each stream; I tried it with colour correction myself on some home-made DV, just to be sure, after I read your comment. Most people I saw claiming that they weren't getting the full 9 streams were limited by their RAM; in other words, hard drive thrashing was causing a bottleneck and hence the dropped frames. If you provide adequate RAM, the potential is there in the beast to render 9 streams. There is absolutely NO x86 equivalent that can come close to this level of productivity and efficiency in a software solution; the horsepower just isn't there. Now, I know you can get breakout boxes from Avid, which would really be the way to go if you could afford it, but I am arguing the performance of the G5 in a software solution that is highly regarded and widely used, and in a similar situation nothing on x86 comes close. I even tested it myself after your comments above, to be 100% sure: I rendered 9 sample streams of the kids' christening with full colour correction on each stream, and there were no dropped frames. I was also at the 'Power of Panther' presentation, where the advertising agency who created the Mini advert gave a demo of FCP with the G5 plugin; again, in real time, they added effects to 9 streams and it rendered perfectly with no dropped frames. So it can be done. There may be other mitigating factors behind the reports you have read that FCP can't do 9 streams with effects.
As for compositing: while Shake may or may not be as high-end as other solutions out there, it is improving at a very fast rate, and currently the combo of FCP, Shake, and the G5 does offer unparalleled performance at video editing and compositing at a fraction of the price of a similarly performing solution.
As for the SGI solution, that's kind of a pointless example, since it is exponentially more expensive than any x86 or G5 solution; it isn't even in the same ballpark in terms of price. Even so, having asked my sister, she said she wouldn't touch the SGI solution; she would get an x86 or G5 with an Avid box instead. She told me the Avid/G5 or Avid/x86 solution would be much cheaper and arguably as powerful. Still, we really have gone off topic here by bringing SGI into the equation.
 
"Lightwave is old news. What portion of the film market does it command? Try running RenderMan using Lightwave... no. It is nothing compared to Maya Unlimited, Houdini, or Softimage XSI. Lightwave is ancient. Why is Maya used as the principal modeler for Pixar's movies and animation, along with in-house tools?

Where is Maya Unlimited for OS X? It hasn't been released and there are no signs of life. In fact, where is Houdini or XSI for OS X? They are both higher-end than Maya."

Lightwave is not old news. There is more than the film industry out there ;) The gaming and game design industry, which I have more experience with, uses Lightwave regularly, and it is a very good application. I believe LucasArts uses Lightwave extensively for many of its products, like Jedi Academy, for modelling many of the characters. As such, Lightwave is a very relevant app to mention, since it is used extensively by the gaming industry.
RenderMan, however, is used extensively by the film industry, and again that is another app that runs on the Mac. I'm told, though I don't have first-hand experience, that RenderMan is presently much faster on the G5 than the Opteron or Xeon equivalents. Again, though, you wanted examples of 3D pro apps that Apple can say run on OS X and its machines; this is another one.
As for Maya, you are correct, there is no Maya Unlimited. At SIGGRAPH, I believe rumours were circulating that Maya Unlimited is coming sooner rather than later to the Mac platform. I would certainly imagine this will come to pass now that Apple has a powerful Unix workstation on the market in the G5. However, you are right: presently I cannot go into a shop and buy Maya Unlimited, but I can still buy Maya Complete. Maya Complete is marketed at the 3D pro workstation market by Alias, so it is relevant to what we are talking about here. The point is that you wanted examples of 3D apps that you claimed were non-existent on the Mac platform; seemingly they do exist :) Houdini and XSI... well, I completely disagree about Houdini being more high-end than any of the Alias packages, but on that note we will agree to differ. On the XSI front, yes, you are correct, it is not presently available for the Mac, but there are apps available that will probably do the job just as well. It would be nice to see XSI on the Apple G5, but again I would imagine that this will come in time, since Apple now has some serious hardware and an operating system to run these kinds of apps on; it is an extremely attractive platform.
There are other 3D apps from Luxology, and there are CAD apps like ArchiCAD, AutoCAD, etc., which are all available on the Mac. Incidentally, I don't fancy running Maya Unlimited on Win XP or Win 2K; I have found them to be resource hogs, so forget multitasking reliably while working with a big render, and forget stability. OK, then: you were saying that you could move to Linux and use the Linux versions of these apps. That's a good idea. I use Gentoo myself on dual Xeons at home and it's a fantastic OS; however, driver support for most of the pro graphics cards you mentioned is horrible under Linux. Add to that, the X server is old, extremely buggy, and fairly bloated. It still doesn't support true alpha blending or transparency in the OS or apps; those kinds of effects have to be done in software. Most of the apps you mentioned also tend to run faster on Windows than Linux due to driver support problems. I don't know how often the command 'startx' gets typed into the terminal while I'm using any kind of 3D app on Linux; X isn't well known for its reliability. I would much prefer to be using a 3D app on top of a Unix OS like OS X, which has hardware acceleration at its core in Quartz Extreme and is extremely fast AND reliable. If I need X apps, I can run them in Panther, they get hardware accelerated, and X is extremely stable on OS X. This would definitely be a platform of choice for 3D apps, IMO. It also appears that I am not alone in this, when you consider the number of apps being ported to OS X on a daily basis.



"The FireGL is NOT fully Maya certified. Quite buggy. Try opening a million-poly scene on the FireGL. Maya is a joke on the FireGL and shouldn't be compared to Wildcat cards, which are FULL OpenGL 2. You're comparing apples to oranges. Do a Google search on the performance differences."

Correction: the FireGL is certified with the drivers I mentioned above, from the 19th of November 2003. Maya 5 certified. Also, I asked a mate of mine with this card what the performance is like in 3ds max, Maya, etc. on x86; he said the performance improvement is considerable and raved about the price/performance of the card. Feel free to take this with a grain of salt until you test it yourself, however; I would.
Anyway, this is kind of pointless because that is the x86 platform; what does matter is whether Maya on the Mac runs smoothly with Apple's '9800 Pro'. Tried, tested: it does! :) I took your advice on the Google search and found a lot of rave reviews for the FireGL X2. However, like most new hardware, its performance will only really be realised with more mature drivers. I couldn't find anything on the new drivers from ATI, which according to ATI provide substantial performance improvements; these drivers are also supposed to fix the problem with high-poly scenes where performance would suddenly drop off. I would argue that the FireGL is a much more programmable card than the Wildcat VP. It is also less than half the price of the Wildcat, and extremely competitive. Assuming Apple is using the standard FireGL drivers with OS X, this was a very wise choice of hardware on their behalf. One thing I would like to point out is that while the Wildcat VP supports OpenGL 2, it does not implement all of its functionality in hardware. The ATI and nVidia cards implement more of the OpenGL 2 standard in hardware than the VP from 3Dlabs, and they are much more programmable with more flexible pipelines than the VP technology, so their performance should improve further in future apps. Worth noting, I believe.

"Do I sense a performance loss? You're also going to convince me that this is stable as well? Does your Linux have 64-bit support... no. I'm sure it will in time, but you can't escape that much performance loss. If you don't like OS X, you're screwed. If I don't like something in Linux, I can change it. Hey, what about IRIX? Is that also an option for the Mac? LOL"

What exactly are you talking about? I mentioned that you can run Linux on the G5 platform as well if you want. There are flavours from Gentoo, Yellow Dog Linux, etc. at present, but they are very much beta. The G5 is a radically new platform; what do you expect? It will take some time for them to improve.

What exactly do you want to change about OS X? What exactly is your point? Why would you want to? It's extremely well integrated with the hardware, and both run happily together. Are you talking about building it from scratch, like you can with Gentoo? You can, I think, if you don't want the graphical interface: you can download the kernel source from the Darwin project and then build around that if you want. Is this what you are talking about? If there is something you don't like, why don't you join the Darwin project and make some constructive submissions and suggestions to the open-source community and the Apple people there who develop OS X? I would emphasise constructive criticism here, because it would appear to me that you are desperately looking for things to use as leverage for criticising OS X and the G5 unfairly and without proper research.

Personally, I love building and tweaking on Linux. I run Gentoo on a PC and will dual-boot my G5 when Gentoo is more mature. I haven't yet found a need to alter anything fundamental in OS X. What part of OS X or Unix/Linux do you need to alter? I have my custom shell scripts, and I have built some of the X apps I needed with GCC 3.3 and some with XLC and XLF. I also have access to the DarwinPorts and Fink ports collections for when I'm lazy! What exactly is your problem with OS X?
"The same 3d pro apps that you are looking for such as for vid editing , photoshop, Lightwave etc... do not run on LInux."

"Maya, XSI, Houdini: all on Linux. Want me to name more? Recent articles in several sources and mags talk about CrossOver, Win32 extensions for Linux, running Office, Photoshop, etc. with NO emulation."
I am well aware that there are Linux versions available. However, as I already stated above, you sacrifice speed for reliability with the Linux platform (bar the SGI option), and the biggest problem with graphical apps like the ones you mentioned on Linux is the X server. The point I was making was that there are even fewer commercial pro apps like the ones you mentioned, for video editing, 3D animation, modelling, and rendering, available for Linux than there are for OS X. Please don't insult my intelligence by arguing for the sake of arguing that SGI IRIX runs a lot of commercial video editing apps; those are for IRIX only, made by SGI to run on their OS and their hardware, hardware that is exponentially more expensive than anything made by Apple or Wintel. Is there video editing and compositing software commercially available for a typical Linux x86 or PPC workstation? Please tell me if there is; I would love to find it. I am also aware of the WINE project and running some apps like Office on Linux. However, it still stands that OS X is the only Unix platform with native MS Office support, and it has native support for (like it or not) MS standards like WMP, etc.
As far as emulation of Windows goes, there is Virtual PC 7, to be released in January with G5 support.
 
Here are benchmarks showing that the high-end FireGL is nothing compared to the competition among the other pro graphics cards.

http://www20.tomshardware.com/graphic/20030123/opengl_nv28_fgl9700-13.html <--- This is a bit old, but it still shows the FireGL 128MB version to be slow.

http://www.3dchips.net/content/review.php?id=63&page=18 <---- These are the benchmarks that count, since Maya is involved. These tests were on an Intel system.

http://www.3dchips.net/content/review.php?id=63&page=17 <----- Once again with Maya. Same tests, except done on an AMD system.

You won't get good Maya results with a FireGL.

Here is another from Tom's Hardware demonstrating Maya with the FireGL X2 256.

http://www20.tomshardware.com/graphic/20030916/opengl-19.html

http://www.xbitlabs.com/articles/video/print/3dsmax5-quadrofx3000.html

I think you and your friend are so enthused with the Mac market that you are blinded to the truth and will believe anything, even when evidence has been provided. As I said before, the FireGL is a joke in Maya.

If you're going to use the television company your sister works for as grounds to prove your point about Lightwave, then just stop. Lightwave is mediocre in quality and ANCIENT compared to the programs I mentioned before. zzzzzz

Show me proof that Jedi Academy was all made using Lightwave.

In this field you are GIVEN the hardware of your choice, meaning you can choose any video card you want along with any system. Is it any wonder that Return of the King was made on 220 Linux systems, 125 SGI, 50 WinNT, and 15 Macs? Oh, you're probably going to say something to counter this. Well, I'll provide proof.

http://www.theonering.net/perl/newsview/8/1047582857

If you're about the best performance for the price, then yes, I will take either the 9800 Pro 256MB or the 128MB version over the other pro cards and overclock it, but you don't get that luxury with a Mac. Not that overclocking is substantial grounds for saying what's better than what, but I have that extra option, just like the added gaming options a PC has over a Mac. Who cares about that.

I would lastly like to point out that the 9800 Pro 128MB is OLD, and that's the best the G5 has to offer. This is the problem with the Mac: since there is no variety in hardware, whatever you do have in the system is already dated. Since time is worth more than money, I would rather have more open options in the professional market.
 
The difference in benchmarks between the various processors indicates that some are slower than others, but they are all too slow.
For example, run a radial blur on a 300dpi tabloid-size document; none of the CPUs tested can do it in real time.
I.e., you take a shorter coffee break with the fastest CPUs, but it still amounts to a coffee break.

10 years from now, this obsession with benchmarking will seem very quaint, and totally irrelevant to the most important issues at hand:
software design and human interface engineering.
In this respect, Apple is way ahead of the industry, and their hardware is certainly fast enough to run current software as effectively as anything else out there.

BTW, IMO Apple should include at least the QuadroFX 1000 as a BTO option for running hardware overlays in programs like Maya.
 
Originally posted by Mr. Anderson
But this isn't all that bad - imagine if we didn't have a G5 and were still stuck with the G4s.....;)

At least now we're more competitive....

D

Words of wisdom from our community's most prolific poster.

Q: DOES IT REALLY MATTER?

A: NO. This thread has been posted so many times, and the debate has raged for 20 years; until Mac OS is available on x86, or Windows is available on PPC, the debate won't be settled.

The only true benchmark at this time is on 64-bit Linux, using server-based apps, which have little influence on the consumer market.


The Sega Dreamcast (released in 1998) runs a 200MHz Hitachi SH-4 RISC CPU, has a 128-bit graphics core, and still plays games better than the latest and greatest desktop computers. If you don't believe me, play Sonic 2 on it.

So the point is that it's horses for courses, pick the right tool for the job, Blah blah blah...
 
"The Sega Dreamcast (released in 1998) runs a 200MHz NEC RISC cpu, has a 128bit graphics core and still plays games better than the latest and greatest desktop computers. If you don't believe me play sonic 2 on it."

You're getting Color Depth and Bus Width mixed up.

1) Voodoo Banshee is capable of only 16-bit color, but the memory bus is 128-bit.

2) Voodoo5 5500 is capable of 32-bit color, but the memory bus is 256-bit (dual processors with 128-bit per graphics chip).

3) The most common PC video card bus widths now are 64-bit, 128-bit, and 256-bit.

4) 128-bit color for PC video cards has been out a year or two, but 128-bit video memory bus has been around for many years.

Let's run through this "console vs PC" thing. 1) You are a game coder and you have to do a game for the Xbox. You have one hardware platform that will not change at all until a new console version is released; your graphics hardware and driver are a given, set in stone. 2) You are a game coder and you have to code for at least the major video chipsets, and your code must be able to scale from the slower-than-average PC to the middle-of-the-road PC and then take advantage of the best PC hardware possible, all without having a locked-in video chipset as your target platform.
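To put that scaling problem in code (a sketch of my own, not anything from a real engine): on a console the fast path is fixed, but on the PC you end up picking a code path at startup based on what the machine reports. The capability check below is a hard-coded placeholder; real code would query CPUID or the 3D API.

#include <stdio.h>

typedef void (*blend_fn)(float *dst, const float *src, int n);

/* baseline path that runs on any chipset */
static void blend_scalar(float *dst, const float *src, int n)
{
    int i;
    for (i = 0; i < n; i++)
        dst[i] = 0.5f * (dst[i] + src[i]);
}

/* "fast" path; a real engine would supply an SSE or AltiVec version here */
static void blend_fast(float *dst, const float *src, int n)
{
    blend_scalar(dst, src, n);   /* placeholder implementation */
}

/* placeholder capability check -- real code would use CPUID on x86 or ask
   the OS whether AltiVec is present on the Mac */
static int machine_has_simd(void)
{
    return 0;
}

int main(void)
{
    blend_fn blend = machine_has_simd() ? blend_fast : blend_scalar;
    float a[4] = { 1, 2, 3, 4 }, b[4] = { 5, 6, 7, 8 };
    blend(a, b, 4);
    printf("%.1f %.1f %.1f %.1f\n", a[0], a[1], a[2], a[3]);
    return 0;
}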

Let's also not forget that consoles run at 640x480. Run any PC game at that resolution and see how bad it looks compared to a higher one.
 
Originally posted by oldschool
Who cares? I don't think anybody here bought an apple because it was faster.
Exactly.. If all we cared about was speed, we'd all be weenies. Just like dem utter folks on dah dark side. Dem boys like speed and blue screens. y'nah... Just can't figure dem out.

Not that we want to be left behind, but that's not the number one reason.
 