Originally posted by Falleron
Well, that demo was using the 9600, not the 9700 that you mentioned in the PC.
No, the Mac numbers are from Apple's website, where it says they used the 9800 Pro.
Originally posted by MrJamie
The demo used by Apple is 02, I think, while all the others use 01. Not sure how much of a difference that makes, but it could be significant.
Originally posted by hvfsl
Thanks, I did not realise that. If true, it would explain the difference, and Macs could actually be faster than PCs, as Apple says. (It says the 3 GHz P4 gets 200 fps while the dual G5 gets 300 fps, and they are using the same graphics cards and settings.)
Originally posted by ColdZero
A 3 GHz P4 getting 275 fps in Q3 is absolute BS. In benchmarks from yesterday, the 3.0 P4 is getting 394 fps and the 3.2 is getting over 400 fps. I'd love to know how Apple got a score of 275; how do you misplace 100+ fps? Was there stuff running in the background? People say, "Who cares if it runs at 400 fps, I can only see 30 fps." The answer to that is: sure, you can play Q3 on a computer that gives you 55 fps and it will be fine, but when Doom 3, Half-Life 2 and other games come out, are you going to want the extra horsepower the 3.2 has, or do you want to play them on that 55 fps computer? It's about how long you can use this computer to play games, not whether it can get 400 fps in Q3 today.
1) That's bull****. Your Wintel machine is only crashing so much because you probably have little to no idea what you're doing. From your previous posts, I'd say you're just another Mac zealot.
2) Apple paid a group called Veritest to run a set of tests on the 2 GHz G5 and a pair of Dells. This report contains the results. How interesting: Apple wins. But look deeper and you'll find the reasons why.
http://www.specbench.org/osg/cpu200...ts/cpu2000.html
That page is SPEC's official site. You'll note the scores for the dual 3.0 GHz P4: both FP and int base scores break 1200, which is 50% higher than the G5's supposed scores. This can't be right, you say; Apple's tests show that the dual 3.06 Xeons get a hair above 800 in FP base and about 880 in int base. Well, read the testing conditions.
Apple, in all its kind fairness, had the PC hobbled. First off, they fed it Red Hat Linux 9.0; as we all know, Red Hat is not a good example of Linux, and Linux is not the pinnacle of speed. Second, they required the PC to compile everything with GCC 3, and this is the clincher. GCC is Apple's native compiler on OS X: Apple has optimized the everliving **** out of it, submitted patches to the GCC maintainers for AltiVec optimization, and everything else to that effect. The x86 version of GCC is dog slow; both Borland's compiler and Intel's optimizing compiler blow it out of the water. Last I checked, even the compiler in MS VC++ was a better performer.
So, under these nice testing guidelines, where everything in the system is compiled with GCC 3, the G5 is the faster processor. Never mind that such a situation DOES NOT EXIST in the real world, and that you'd be real lucky to find a 3.06 GHz Xeon running Red Hat.
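For what it's worth, the "50% higher" claim is consistent with the numbers quoted in this thread (roughly 1200 base for the dual Xeon on SPEC's site versus roughly 800 in Apple's report); a quick sanity check:

```python
def pct_higher(a, b):
    """How much higher a is than b, as a percentage of b."""
    return (a - b) / b * 100

# Approximate SPEC base scores quoted above (not official figures):
spec_site_score, apple_report_score = 1200, 800
print(pct_higher(spec_site_score, apple_report_score))  # 50.0
```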
Grr, way to ruin even a good product release, Apple. Instead of doing the right thing and charging $1400 or so for the low-end machine (it uses almost exactly the same mobo as the old G4; there's no real difference), they charge $1999. Of course, don't let people know that the G5 is a smaller core and costs less to make than the G4 (those 2 MB of cache really add up).
3) Alright, taking the kid gloves off here.
--------------------
Note 1: Feelings on G5:
Plagiarized a bit from another forum.
From a merely technical perspective, what do I think? Alright, I'll start with the positive.
+ Apple is now competitive with 2.4 GHz P4 processors when running 32-bit operations
+ It's 64-bit, which helps (a) with addressing large chunks of RAM or databases, and (b) with doing monstrous integer operations, like those done in Mathematica
+ Serial ATA, USB 2.0 "Hi-Speed" (marketing people suck), FireWire 800, 802.11g, front-mounted ports
+ Fast-ass FSB, even by Intel-world standards
+ The dual-FSB implementation allows for better memory bandwidth to each processor
+ 8 GB of RAM, in the same form of dual channel used on the Opteron (not really dual channel, more like double width: the old 2 chips = 1 bank)
+ At bare minimum, 64-bit/33 MHz PCI slots; at best, 64-bit/133 MHz PCI-X slots
Now, this is gonna hurt.
- The chip is no more competitive with Intel's top-speed chips at running 32-bit apps, meaning anything out today will NOT RUN BETTER on the G5 without a recompile at the least, and a code cleanup at the worst.
- ****ing UGLY case. Not even Lian Li would make something that abominable. For the first time, I'd rather own a PC case than a Mac, by far.
- Removed things: only one optical bay and two 3.5" internal bays. (Really, why, Apple? Everyone in the PC world has at least 2 5.25", 1 external 3.5", and 3 internal 3.5" bays, and the good cases have 4/2/4.)
- Back to only 3 PCI slots. A SCSI card plus an M-Audio Revolution leaves only one for future expansion.
- ****ing expensive. You know, the 970 is a MUCH SMALLER chip and costs less money to produce than the 7485s, not to mention it doesn't need 2 MB of DDR SRAM cache to perform at a reasonable level. The 1.6 GHz model should be around the $1500 mark without a monitor. Anything more is unreasonable.
- The change in FSB types means the dual processors can no longer cache-snoop each other. So if a thread gets reassigned from one processor to the other, there will be a massive slowdown while the second processor's cache waits on that thread's first load, since it doesn't already have the memory in cache the way cache snooping allows. A better way to handle this would have been a single 64-bit-wide FSB at 500 MHz DDR, shared between the two processors so they could still cache-snoop; the memory subsystem as it stands can't saturate that anyway. (That's 1 GHz in transfers. It's important to distinguish the physical clock from the DDR transfer "clock", because the CPU's speed is derived from the physical clock, not the DDR transfer clock.)
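A rough sketch of the bandwidth arithmetic behind that shared-FSB suggestion (the 64-bit width and 500 MHz DDR physical clock are the hypothetical numbers from this post, not Apple's shipping design):

```python
def bus_bandwidth_gbs(width_bits, transfers_per_sec):
    """Peak bandwidth of a parallel bus in GB/s (decimal GB)."""
    return width_bits / 8 * transfers_per_sec / 1e9

# 64-bit bus, 500 MHz physical clock, double data rate
# -> 1e9 transfers per second.
print(bus_bandwidth_gbs(64, 2 * 500e6))  # 8.0 (GB/s)
```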
Now most of my problem with this isn't the actual computer. It's Apple. Fraudulent benchmarks used to portray their computer as faster than it is. Charging way more money than is fair, both for the computer and for the BTO options. Lying about having the first 64-bit desktop/workstation chip out there: the Opteron is already out, and the Athlon 64 will beat them to market, with a real launch in August while these machines don't ship until September. Oh, and not comparing to any AMD stuff at all; more like pretending AMD doesn't exist. I can promise you that an Athlon XP 2800 or so would have thrashed Apple in Apple's own paid benchmarking, because AMD's architecture is much simpler than the P4 architecture, which depends on having long streams of execution, and as such the Athlon paths in GCC are much, much closer to a good compiler than the P4/Xeon paths.
--------------------
Note 2: Calculation of dBA:
Plagiarized from the Ars C&CF forum.
The Health and Safety Commission put together an info sheet on how sound works. I think it is very handy to know how dBs add up when you are trying to build a quiet (or quieter) system.
According to nohsc.gov.au, two equal sources combine to about 3 dB above either one alone, and for every 6 dB of difference between two fans you add only about 1 dB to the louder fan's rating to get the sound level the human ear perceives.
E.g.:
Two 80 mm fans each rated at a maximum of 32 dB and 40 CFM can beat one 120 mm fan rated at a maximum of 32 dB and 60 CFM, because the pair gives you 80 CFM at only about 35 dB, 3 dB above a single fan.
So when trying to quiet down your system, don't just assume that throwing one big fan (120 mm or 92 mm) into your case is the best solution. Consider the airflow you can get out of multiple smaller, quieter fans.
http://www.nohsc.gov.au/OHSInformat...e/NOISECONT.HTM
That link is worth reading in full.
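Those rules of thumb fall out of the underlying math: decibel levels add on a power scale, not linearly. A minimal sketch (the 32 dB and 38 dB figures are just illustrative fan ratings, not from the info sheet):

```python
import math

def combined_db(levels):
    """Combine incoherent noise sources: convert each dB level
    to relative power, sum, and convert back to dB."""
    return 10 * math.log10(sum(10 ** (l / 10) for l in levels))

# Two identical 32 dB fans: about 3 dB louder than one, not equal.
print(round(combined_db([32, 32]), 1))  # 35.0
# A 38 dB fan next to a 32 dB one (6 dB apart): roughly loudest + 1.
print(round(combined_db([38, 32]), 1))  # 39.0
```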
Originally posted by pianojoe
It's unethical to swear at somebody just because he sees things differently. (This is, indeed, called discrimination.)
Originally posted by Farside161
C: a 1.6 GHz machine with PCI-X, S-ATA, AGP 8X, and 4 GB max RAM for $1999 is not expensive
Originally posted by MrJamie
I still don't see how the macs are worse.
Apple posted an explanation of the benchmarks here, and the overclockers/AMDTech articles are, for the most part, opinionated rubbish. They do make some good points that I agree with, though.
edit: link fixed
Originally posted by ColdZero
Really? Would you mind pointing out what is so opinionated about a Quake 3 benchmark that tells you the detailed specs of every system it runs on? After reading an AnandTech article, I could go out and recreate the benchmark on the exact same system if I wanted to. Maybe Apple could learn something about benchmark detail from those opinionated-rubbish tech overclockers, who, I might add, have nothing to gain from biasing a benchmark.
He said Veritest used GCC for both platforms, instead of Intel's compiler, simply because the benchmarks measure two things at the same time: compiler and hardware. To test the hardware alone, you must normalize the compiler out of the equation, using the same version and similar settings. And, if anything, Joswiak said, GCC has been available on the Intel platform for a lot longer and is more optimized for Intel than for PowerPC.
He conceded that the Dell numbers would be higher with the Intel compiler, but said the Apple numbers could be higher with a different compiler too.
Joswiak added that wherever the Intel test setup was modified, they chose the option that produced higher scores for the Intel machine, not lower. The scores were higher under Linux than under Windows, and in the rate test the scores were higher with hyperthreading disabled than enabled. He also said they would be happy to run the tests on Windows and with hyperthreading enabled, if people wanted, as it would only make the G5 look better.
The G5 modifications were made because shipping systems will have those options available. For example, memory read bypass was turned on: even though it is not on by default in the tested prototypes, it will be on by default in the shipping systems. Software-based prefetching was turned off and a high-performance malloc was used because those options will be available on the shipping systems (Joswiak did not know whether this malloc, which is faster but less memory-efficient, will be the default in the shipping systems).
Finally, a voice of reason in a crowded room of zealots... Welcome.
Originally posted by scem0
This is taken from another forum where I am talking to someone else: