For Pity's Sake

This is meant to be a thread about benchmarks, but there are a few people here spouting off in their own little worlds.

I don't care.

Firstly, get us some real benchmarks that mean something to the average user.
Photoshop is good, general apps are good.
How about game FPS? Always a good indicator.

The main reason I buy a Mac: the OS.
Of course they often look good and have great integration with other devices and apps, but that is because Apple controls both the CPU design and the OS design.

I don't use Linux because I don't want to. I love the simplicity and look of OS X. Linux is a great OS from a tech point of view, but as a user it can be a ballache I don't need.

The G5 will be quicker, simple as that really.
Apple is doing just fine. I have never seen such a strong product line in all my years with Apple; I am optimistic.
 
...my issue is not that the G5 is a bad machine...nor do I believe the G4 is faster (although...for now...many synthetic tests paint a differing picture)...when the G5 finally gets an OS (with more than a few optimised libraries) that is written specifically to address its unique architecture...and when apps are finally coded to take advantage of the PPC970...it'll be one of the fastest PCs available...BUT as I've stated...ad nauseam...I bought a G4 (to replace a dead G3)...and I found the G4 to be a much better "fit" for what I wanted/needed in a desktop machine...and seeing that I paid $150 under MSRP it gave me more than enough money to add a few minor upgrades...I'm not defending a purchase...I'm not bashing the G5...the G4 (for what I paid...and the cost of my extras) was more fiscally responsible and had more "bang for the buck"...



I agree that it will take about a year or two for the G5 to come into its own and for Apple to ship all Power Macs with duals. Until then, the dual 1.25 may be the best buy for many users. Apple still makes money either way, and all those dual 1.25 users will eventually move up to dual G5s, so Apple gets two sales out of them.
 
Originally posted by John Q Public
all those tests were performed in 2000-2001...because there is no PC200 SDR... and you can't perform a fair or accurate comparison between PC3200 and PC100 or PC133...unless you're trying to prove how far chipsets and processors have progressed in the last couple years.

This is where you are wrong. Your references do not prove or even suggest that there is only a 3-5% difference between DDR266 and PC133. Think scientifically--you are not doing a controlled experiment. The cause of the low bandwidth could be either the chipset or the memory.

So you have proven nothing. Now, what reason do you have to suspect that a RAM module accessed on both the rising and falling edges of the clock doesn't have double the throughput of a module accessed only once per clock?
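For reference, here's the back-of-the-envelope arithmetic (a rough sketch; it assumes the standard 64-bit DIMM data bus, and real chipsets deliver well under these theoretical peaks, which is the whole point of contention):

```python
# Minimal sketch of the theoretical peak-bandwidth arithmetic in question.
# Assumes the standard 64-bit (8-byte) DIMM data bus; real chipsets and
# memory controllers deliver well below these peaks, which is the debate.

BUS_WIDTH_BYTES = 8  # 64-bit DIMM data bus

def peak_mb_per_s(clock_mhz, transfers_per_clock):
    """Peak rate = clock x transfers per clock x bus width."""
    return clock_mhz * transfers_per_clock * BUS_WIDTH_BYTES

for name, clock_mhz, tpc in [
    ("PC100 SDR", 100, 1),
    ("PC133 SDR", 133, 1),
    ("DDR266 / PC2100", 133, 2),  # double data rate: both clock edges
    ("DDR400 / PC3200", 200, 2),
]:
    print(f"{name:16s} ~{peak_mb_per_s(clock_mhz, tpc):5.0f} MB/s peak")
```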
 
Re: Re: Re: Re: Re: Daveg5

Originally posted by soggywulf
Yes, perhaps. I am not too familiar with high-end 3D packages, but I know that these GPUs (whether consumer, gamer, or pro) are not used for non-real-time renders. They can only benefit the low-detail GUI design and proofing displays--so it seems to me that this is a pretty marginal gap in capabilities.

I am still curious as to what Dave means when he says the graphics options are so poor.

Depends on the application; most professional apps (3DS Max, Lightwave, Maya, almost all CAD programs) benefit hugely from the heavily optimised drivers on these GPUs.

I'm not exactly sure what Dave means either, possibly something to do with optimizing Radeon 9800 drivers? :confused:
 
Hi, I'm new to this place. I usually speak French, so my English is not very good, but I'll try to do my best.

I work in media; I run Photoshop, Combustion, After Effects, and 3ds Max all day long.

I am planning to buy a PowerBook as soon as Apple puts a G5 in it.

As for the benchmarks, I just want to say that raw power is really not the first thing to look at. That's why I'll get a Macintosh as soon as possible.

I run XP every day and, trust me, it is painful. I used Mac OS X 10.2 on a dual 867 G4 a couple of times, and my work got done more quickly than with XP on a faster Pentium.

We must remember that nobody does big movie rendering all day long.
It's preferable to have a good OS than a faster computer.

What we should have are Final Cut Pro benchmarks, Shake benchmarks, etc.

Again, sorry for my really bad English.
 
Originally posted by John Q Public
...my issue is not that the G5 is a bad machine...nor do I believe the G4 is faster (although...for now...many synthetic tests paint a differing picture)...when the G5 finally gets an OS (with more than a few optimised libraries) that is written specifically to address its unique architecture...and when apps are finally coded to take advantage of the PPC970...it'll be one of the fastest PCs available...

If I had to do it all over again, I wouldn't have been so harsh. My posts had a tendency to come up just before your replies to others, and I was a little too quick to hit the reply button.

On this point, I agree to disagree. We're looking at the same results and are interpreting them differently:

I expected the low-end G5 to perform much worse than the high-end FW800 dual G4 in CPU benchmarks that took advantage of either the Velocity Engine (AltiVec/VMX) or two processors. I was pleasantly surprised when it performed comparably or better. IMO, even the underpowered single-CPU G5 looks like a very good machine as it stands today. You might have been expecting it to blow away the G4, especially given its price tag, and now see evidence that it doesn't unless the benchmark code is recompiled.

BUT as I've stated...ad nauseam...I bought a G4 (to replace a dead G3)...and I found the G4 to be a much better "fit" for what I wanted/needed in a desktop machine...and seeing that I paid $150 under MSRP it gave me more than enough money to add a few minor upgrades...I'm not defending a purchase...


I bought one of the first G4s (G4/400 Yikes) four years ago and, truth be told, it was too much computer for my needs at the time. Surprisingly, it still serves an essential need today, because even its cobbled-together-from-the-G3 motherboard had enough forward-thinking components and a decent-enough CPU. For many people purchasing a G5 (100k preorders and counting, as you mentioned), it may be too much computer for them. The nice thing is it will hold its value for a long time and still probably end up with a decent ROI.

Yes, Apple needs to ship much more than 100k G5s. While that's as much volume as the first year and a half of iPods, it still is not significant enough.

...if Itanic (ITANIUM, Merced or IA64...or whatever name Intel's calling it this week which has again been delayed...this time to Q3 2004) fails...they can easily take the hit...

The Itanium has been out for a couple of years now; there are full Itanium 2 configurations shipping right now. Price/performance for a 64-bit server is miserable, and now that AMD has a tier-one vendor, I imagine Intel finally has some fire under its butt.

You're probably thinking of Prescott (aka the "Pentium 5"), which is supposed to debut at 3.4GHz and is rumored to have some hidden 64-bit support (I'll believe it when I see it). It has only slipped from Q3 2003 to Q4 2003, and it looks like they'll make it, even though it is running a bit hot right now (so it won't be breaking 4GHz this year, like everyone was bragging Intel would be).

It's an open secret that Intel has had a lot of trouble getting the 90nm designs from their research fab to work at their new Fab 11X in New Mexico (11X is sort of like the production half of IBM's Fishkill: 12" wafers, robotic, etc.). Prescott is supposed to debut at 90nm.

BTW, so will the 3GHz G5s sometime next year (I doubt IBM will have any trouble hitting this date; they already fab 90nm chips for other companies out of Fishkill), as well as perhaps a low-powered G5 suitable for notebooks. As for when Apple will introduce the latter, who knows. My personal view is that the Motorola G4 still has a lot of legs (heck, it has just shrunk down to 130nm!). More importantly, IBM's G3 still has some wiggle room (it just reached 1GHz, and they can always tack on VMX or go down to 90nm), so it looks like Apple can put two more generations of G3s in their consumer line. I think a G5 professional notebook any earlier than Q1 2005 would be "aggressive".

...if AMD's Athlon64 (which still won't have a Microsoft OS until Q2 2004) stumbles...no problem...they've very deep pockets...

The Athlon64 has a Microsoft OS right now because the chip is still 32-bit compatible. Last I heard, Microsoft had not set any ship date on 64-bit Windows targeting AMD chips, so Q2 2004 may be a little optimistic. I don't think there are any hurdles in particular, as I believe a friend of mine has had 64-bit Windows for Hammer on his desk for about two years now. I'm only guessing, though.

If you want to talk about a chip that has slipped schedule, the Athlon64 was supposed to have debuted in Q3 2002 and now they'll be lucky for Q4 2003!

AMD does not have very deep pockets. They have $700 million in capital and lose between a quarter and half a billion dollars a quarter (thankfully, they are slowing the rate of loss). A modern fab now costs $3-5 billion! AMD's Dresden fab is still excellent, but in order to compete, they're going to be using IBM's Fishkill facility for some production (and rumor has it that they've moved most of the development there too).

In other words, the Opteron and Athlon64 need to turn the company around in the coming year. (I think they will.)
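
To put rough numbers on that runway (a crude sketch using the figures above, assuming the losses hold constant, which they won't):

```python
# Back-of-the-envelope runway from the figures above: ~$700M in capital,
# losses of $250M-$500M per quarter. Assumes constant losses (they aren't;
# AMD is slowing the rate of loss), so treat this as a rough bound.

capital_m = 700
for loss_m in (250, 500):
    print(f"At ${loss_m}M lost/quarter: ~{capital_m / loss_m:.1f} quarters of runway")
# => ~2.8 quarters at the low end, ~1.4 at the high end -- hence the need
#    for Opteron/Athlon64 to turn things around within the year.
```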

Isn't it shocking how rich Intel is? It seems almost as if the rest of the CPU industry can barely scrape together enough cash for one 12" fab, while Intel has two (that really should only count as one), is building another (they scrapped one, however), and keeps most of their 8" fabs at capacity. The whole chip industry is screaming for a respite and Intel keeps on with its death march.

...BUT...if the G5 fails to draw enough "Switchers" and new users into the fold, it could mean financial trouble to an already unstable company...

At the risk of sounding ill-tempered, where do you get this FUD? Apple has nearly 7x AMD's capital, is trading just above book value, and has turned a profit nearly every quarter for the last 6 years. In fact, last quarter was the first in a while that they've managed a profit greater than the interest they make off their capital.

Given how much money Apple is sinking into capital investments like Apple Stores, it's shocking that they've managed to continue spending through the downturn and still return anything remotely resembling a profit.

No, they're no Microsoft, but there is a reason many have Apple listed as a BUY.

...last and certainly not least...my apologies to WM. and tychay...the "Slow Learner" comment was aimed at one person...not either of you...

Apology accepted. I'd like to reiterate my own apology for thoughtlessly jumping down your throat.
 
Originally posted by John Q Public
yes and no...those "cheap" $150 drives add up when you're pulling 7 of them out of a deceased B&W (the smallest drive removed after death was 40GB and yes...somewhat cheaper...but not the 80s that found their way into the new G4 that replaced the B&W)

My suggestion in this case was to get a FW800 enclosure (there are models that can hold 6 or more drives at once). I'll grant you the cabling issues. This is moot, since you'll probably just wait until Apple introduces a G5 that holds tons of drives; based on previous posts, the sort of work you do seems to imply you can hold out for three years, or as long as it takes.

...I've never missed the floppy drive...

The point I was trying to make is that a lot of people, including innumerable "analysts", did notice the conspicuous absence of a floppy drive and thought it would be a deal killer.

On a side note, back in 1993, a friend of mine from Caltech bet another classmate that floppies would be gone from systems within a decade. The iMac allowed him to win that bet.

my big issue with the "new" 2002 MDD machines isn't the lack of FW800 (although it will be nice when I get toys that support faster than FW400)...it's that you only get "AirPort" and not "AirPort Extreme"...and seeing that my intranet here at home (and home office) consists of mostly wireless machines (all supporting 802.11g)...faster connections to other machines (namely my laptop) would be a good thing.

Good point. I forgot about 802.11g.

...and to be honest...I really don't miss OS 9...most of the apps I use are Carbonized (if not Cocoa)...about the only things I still run that are old-tech are Diablo and Starcraft, as the occasional diversion...

There is a Carbonized version of Starcraft out; you can download the updater on the web. It's slower than the Classic version, but it should run great on your new G4.

Take care,

terry
 
Originally posted by John Q Public
one last time...for the slow learner spewing propaganda as fact...

Actually Tychay, JQP was referring to me.
:)

Apparently "slow learner" translates roughly to 'annoying git who has refutted all of my points with evidence but who refuses to go away no matter how many times I insist that I know what I'm talking about'.
;-)
BTW... when you [Mr. Public] claim that DDR doesn't provide more than a 3-5% benefit, and I find even ONE benchmark that is significantly faster, you're done. I don't care if you find instances where DDR boards don't benefit much in a particular benchmark... some apps aren't memory starved, and some configs don't stress the memory subsystem as much as others (the higher-clocked P4s showed more benefit than the Athlons). I provided plenty of examples where DDR solutions with the SAME CPU were MUCH faster than SDR boards. End of argument... unless you want to backtrack on that stance... maybe we are all wrong now and you said "DDR is only 3-5% faster on some benchmarks".

Back on track...

I suppose we should cross-post a bit, since this is a 1.6GHz benchmark thread (or it was):
Photoshop Benchmarks on 1.6 GHz G5.

The benchmarks show that the 1.6GHz G5 wins a few Photoshop operations (a bake-off with a 50MB test file). The overall winner is the P4 3.06, and the Athlons take a few operations too. Overall, that's very good performance for a non-G5-optimized Photoshop with the new G5-optimized libraries installed.
I expect the duals to really smoke... especially when Panther ships and we get a fully optimized Photoshop. Panther is already noticeably more responsive than Jaguar, and it's still a way off from GM (I'm typing on Build 44).
 
Originally posted by ffakr
Actually Tychay, JQP was referring to me.
:)

Apparently "slow learner" translates roughly to 'annoying git who has refutted all of my points with evidence but who refuses to go away no matter how many times I insist that I know what I'm talking about'.
;-)

actually ffakr...the slow learner comment was intended as 'annoying git ignoring generalities made in order to start an argument'

:)

...but that is past and buried at this point...

I'll spell out the original intent of the comments on DDR...3-5% was a generalization...most DDR-based chipsets outperform SDR (there are a few mobos with SDR that outperformed DDR boards at their time of release)...DDR isn't the cause of improved performance...performance increases are based on improvements in chipset/memory controller design and faster processors...a point you've agreed to before...

Originally posted by ffakr
When you look to the PC, you find that direct comparisons between SDR and DDR chipsets generally compare the KT133A vs. the KT266. Unfortunately, the KT266 was immature when released and the KT266A performed much better. Also, the KT133 benchmarks that I've seen have been performed with rather slow CPUs (typically around 1GHz). The faster the CPU, the more the memory speed would be a factor.

DDR is only a small part of the equation...usually it's the case of the motherboard chipset manufacturers trying to keep up with Samsung/Siemens/Hyundai/Micron...and not being able to use all the theoretical bandwidth created by the continually changing memory technologies...just like AGP 8x having double the theoretical bandwidth of AGP 4x...but there's no technology that can exploit it yet (and probably won't be, with ATi and nVidia working on PCI Express solutions currently)...
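
For the record, the AGP comparison is the same kind of theoretical-peak arithmetic (a quick sketch assuming the standard 66MHz, 32-bit AGP bus):

```python
# Quick sketch of the AGP 4x-vs-8x point: 8x doubles the theoretical peak,
# but as with DDR, nothing guarantees the rest of the system exploits it.
# Assumes the standard 66 MHz, 32-bit (4-byte) AGP bus.

AGP_CLOCK_MHZ = 66.67
AGP_BUS_BYTES = 4

for mode in (1, 2, 4, 8):
    peak = AGP_CLOCK_MHZ * mode * AGP_BUS_BYTES
    print(f"AGP {mode}x: ~{peak:4.0f} MB/s theoretical peak")
# AGP 4x ~1067 MB/s, AGP 8x ~2133 MB/s -- double on paper only.
```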
 
Originally posted by tychay
The Itanium has been out for a couple years now, there are full Itanium 2 configurations right now. Price/performance for a 64-bit server is miserable and now that AMD has a tier one vendor, I imagine Intel now finally has some fire under their butt.

From what I've seen, the price/performance of Itanium 2 servers is about the same as, if not considerably better than, the Power4's in TPC and HPC. I'm not quite sure how good it is as a server (in comparison to the Power4), although it does perform very impressively in MySQL. Opteron has AMAZING price/performance, far better than Itanium 2, Power4++, Alpha 21364, PA-RISC 8K(?), US-III, MIPS 16K, or any other 64-bit chip for that matter.

You're probably thinking of Prescott (aka Pentium 5) which is supposed to debut at 3.4Ghz and is rumored to have some hidden 64-bit support (I'll believe it when I see it). It has only slipped schedule from Q3 2003 to Q4 2003 and it looks like they'll make it, even though it is running a bit hot right now (so it won't be breaking 4Ghz this year, like everyone was bragging Intel would be at).


Heh, not officially at least. I'm sure there will be more than a few geeks (myself included) who are perfectly willing to overclock Prescott past 4GHz at the risk of the PC melting. :D

BTW, so will the 3GHz G5s sometime next year (I doubt IBM will have any trouble hitting this date; they already fab 90nm chips for other companies out of Fishkill), as well as perhaps a low-powered G5 suitable for notebooks.

It's going to be quite a bit harder to manufacture a 90nm PPC970 than the current 90nm FPGAs in production at Fishkill right now, although I'm sure IBM will accomplish it.
 
Originally posted by Cubeboy
From what I've seen, the price/performance of Itanium 2 servers is about the same as, if not considerably better than, the Power4's in TPC and HPC. I'm not quite sure how good it is as a server (in comparison to the Power4), although it does perform very impressively in MySQL. Opteron has AMAZING price/performance, far better than Itanium 2, Power4++, Alpha 21364, PA-RISC 8K(?), US-III, MIPS 16K, or any other 64-bit chip for that matter.

The SPECfp scores on the Itanium 2 are great, but I believe the Power4 is the better platform for TPC, where the UltraSPARC is still the most popular. It seems any time an Itanium 2 vendor manages a better price/performance, IBM and Sun manage to lower their prices or tweak up the performance; there is a lot more give because the margins are so high. Besides the tactical errors that went into the design of the Itanium (water under the bridge), I think there was a general strategic error: the Itanium doesn't just need to inch out the US or Power4 in price/performance, it needs to beat them by a lot.

In HPC, people aren't going to consider SPECfp when they can simply benchmark their custom code directly and make their own decision. In general, this means that Intel doesn't have the economies of scale for HPC/TPC with the Itanium that they enjoy with the Pentium.

Of course, the whole thing is debatable. What isn't is that Intel is selling nowhere near as many Itaniums as they'd like, nor has HP/Compaq yet justified dropping their own lines to throw in their lot with Intel.

As for the Opteron, it's a great chip with the best price/performance and a great future. But enterprise vendors have been slow to adopt new platforms, so we'll see. It'll probably do best making inroads from the low end due to its backward compatibility. The 970 (aka G5) may give it a run for its money in price/performance in Q1 2004 when IBM introduces their blades (to drag this discussion back to the Mac world).

It's going to be quite a bit harder to manufacture a 90nm PPC970 than the current 90nm FPGAs in production at Fishkill right now, although I'm sure IBM will accomplish it.

I agree. I was just making a slight dig that nothing has actually come out of 11X, since Intel has delayed the Pentium M and Prescott slightly. I'm sure Intel will accomplish it, but it's nice to kick a bully when he's down. ;)
 
Originally posted by soggywulf
Dave, might I suggest using punctuation. :)

Tests show that even with 256MB video cards, performance is pretty weak (less than 50 fps) when you start going above the resolution range where 256MB cards move ahead of 128MB cards (more than about 1600x1200 at 4xFSAA). Any resolution that is reasonably playable with 256MB gets the same frame rates on 128MB cards. Which means effectively a 128MB card is just as good as a 256MB version of the same card. Even for future games.

The reason for this is that the GPUs currently can't handle such high resolutions anyway no matter how much memory you throw at them--that is, the GPU is the limiting factor at those resolutions. So the extra memory is overengineered (for marketing purposes, really) and useless. When a faster GPU comes out, you'll have to get a new card to use it anyway--and that new card may well be able to actually effectively use the larger memory it comes with.

While this is true for current games (and in cases where RAM is taken up by GPU functions such as antialiasing), future games will take a hit from "only" 128MB of RAM because they will have more than 128MB of textures. There's only one map in UT2003 at the highest settings that uses more than 128MB of RAM, and it ends up using 131MB or something like that. If you can keep it all on the card, then you don't have to hit the AGP bus; even at 8x, AGP is a lot slower than on-card RAM.
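
Here's a rough sketch of why high resolution plus FSAA crowds out texture memory (all assumptions: 32-bit color, a 32-bit depth/stencil buffer, double buffering, and supersampling-style FSAA; multisampling hardware is cheaper than this):

```python
# Sketch of framebuffer memory at a given resolution and FSAA level.
# Assumptions: 32-bit color, 32-bit depth/stencil, double buffering, and
# supersampling-style FSAA (multisampling implementations use less).

def framebuffer_mb(width, height, fsaa=1):
    samples = width * height * fsaa
    color = samples * 4 * 2   # front + back color buffers, 4 bytes/pixel
    depth = samples * 4       # depth/stencil buffer
    return (color + depth) / 2**20

for (w, h), fsaa in [((1024, 768), 1), ((1600, 1200), 4)]:
    print(f"{w}x{h} @ {fsaa}xFSAA: ~{framebuffer_mb(w, h, fsaa):.0f} MB before textures")
# 1024x768 plain: ~9 MB; 1600x1200 @ 4xFSAA: ~88 MB -- most of a 128MB
# card gone before a single texture is loaded.
```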
 
Originally posted by tychay
The SPECfp scores on the Itanium 2 are great, but I believe the Power4 is the better platform for TPC, where the UltraSPARC is still the most popular. It seems any time an Itanium 2 vendor manages a better price/performance, IBM and Sun manage to lower their prices or tweak up the performance; there is a lot more give because the margins are so high. Besides the tactical errors that went into the design of the Itanium (water under the bridge), I think there was a general strategic error: the Itanium doesn't just need to inch out the US or Power4 in price/performance, it needs to beat them by a lot.


Interesting links. I'm actually basing my conclusions on the TPC-C benchmark (www.tpc.org). As I stated before, Itanium 2-based HP servers have significantly better price/performance than IBM eServers in this particular benchmark.

Link: http://www.tpc.org/tpcc/results/tpcc_perf_results.asp

IBM eServer pSeries 690 Turbo
tpmC: 763,898
price/tpmC: $8.31 US

HP Integrity Superdome
tpmC: 786,646
price/tpmC: $6.49 US

HP Integrity Superdome
tpmC: 824,164
price/tpmC: $8.28 US

As you can see, although Power4-based servers perform very impressively in TPC-C (expected, considering the Power4's massive bandwidth and huge caches), their price/performance is significantly worse than that of Itanium-based servers, at least in this benchmark.
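
For anyone checking the math, total system cost is just tpmC times price/tpmC (the dollar totals below are back-calculations from the figures above, not published numbers):

```python
# Recover approximate total system cost from the published TPC-C figures
# above: price/tpmC x tpmC. These totals are back-calculations, not
# numbers published by TPC.

results = [
    ("IBM eServer pSeries 690 Turbo", 763_898, 8.31),
    ("HP Integrity Superdome",        786_646, 6.49),
    ("HP Integrity Superdome",        824_164, 8.28),
]

for system, tpmc, price_per_tpmc in results:
    total = tpmc * price_per_tpmc
    print(f"{system}: {tpmc:,} tpmC at ${price_per_tpmc}/tpmC "
          f"=> ~${total / 1e6:.1f}M total")
```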

In HPC, people aren't going to consider SPECfp when they can simply benchmark their custom code directly and make their own decision. In general, this means that Intel doesn't have the economies of scale for HPC/TPC with the Itanium that they enjoy with the Pentium.

That's absolutely correct; people aren't going to consider SPECfp much. That's why I'm basing my conclusions on results I've seen from various HPC applications, namely NAS Parallel, Gaussian, NWChem, MM5, and Star-CD, in which large (32+ processor) Itanium-based servers consistently outperformed comparable (32+ processor) Power-based servers by significant amounts.

Of course, the whole thing is debatable. What isn't is that Intel is selling nowhere near as many Itaniums as they'd like, nor has HP/Compaq yet justified dropping their own lines to throw in their lot with Intel.

It is my belief that the main reason the Itanium is not selling well (and the reason SGI/HP/Compaq aren't dropping their venerable MIPS/PA-RISC/Alpha lines) has to do with the customers, who don't think the benefits of switching to a new architecture (whatever those are) outweigh the costs (getting new software, training everyone in its use, etc.). Any vendor who forces an Itanium-based solution on customers who would prefer a RISC-based one provides a natural opportunity for those customers to switch to other vendors.

Having said that, I can't say any of the aforementioned companies are pushing their RISC chip lines very hard either; rather, they seem to be running out the clock on these chips.

As for the Opteron, it's a great chip with the best price/performance and a great future. But enterprise vendors have been slow to adopt new platforms, so we'll see. It'll probably do best making inroads from the low end due to its backward compatibility. The 970 (aka G5) may give it a run for its money in price/performance in Q1 2004 when IBM introduces their blades (to drag this discussion back to the Mac world).

True, although IBM has recently adopted an Opteron-based platform (with excellent SPEC scores attained from code compiled by none other than GCC 3.3 and PGI 5.0 :D) and other companies are considering it. Seeing that there have already been several contracts for Opteron-based supercomputers (proving that it can scale very well), I wouldn't be surprised if we see midrange and high-end Opteron-based servers in the near future.
 