Originally posted by locovaca


That's because L3 cache costs a lot of $$$, and is usually found on high end workstation/server chips. A majority of the X86 community is the "cheap boxes" community, and adding L3 cache would make it much more expensive and would provide little benefit for their needs. L3 cache is better suited to databases and other software where you have large datasets you need quick access to. Memory that fast is expensive, especially in the quantities that Apple is using (1-2 megs).

The Intel Xeon MP chip has 512k - 1 meg of L3 cache.

I'm confused. If a large data set or database is measured in tens, hundreds, or thousands of megabytes, then I should think that L3 cache would only help for frequently requested fields/records, provided the software knows to identify and cache frequently queried data. For example, if a database were a gigabyte in size, would there only be a benefit from 2MB of L3 cache if a majority of the queries were for only about 0.2% of the data? Is that common? I don't get it; please help.
 
Originally posted by rugby
I can show proof that my G4/400 is nearly twice as fast as a Dell P4/1.7ghz. Run rc5 on both. Talk about pathetic, how can this poor G4/400 beat the pants off of a P4/1.7?

heh :D
 
Originally posted by AmbitiousLemon


no im not talking apple's benchmarks, if i was id say the powermac is 90% faster than a 2.5GHz intel. benchmarks are available all over the web from many independent sources. i quoted some above. every source i have found indicates something similar. the old powermacs were down about 20% from the top of the line amd, and about even with a 2GHz intel (i havent seen benchmarks against a faster intel).

And were these single processor Macs against single processor x86 machines?
 
Originally posted by eirik


I'm confused. If a large data set or database is measured in tens, hundreds, or thousands of megabytes, then I should think that L3 cache would only help for frequently requested fields/records, provided the software knows to identify and cache frequently queried data. For example, if a database were a gigabyte in size, would there only be a benefit from 2MB of L3 cache if a majority of the queries were for only about 0.2% of the data? Is that common? I don't get it; please help.

Many databases, such as those behind websites, access the same data on a very frequent basis (say, for the front page). This is the data that ends up staying in the cache because it's accessed so frequently. In other database settings, a particular record that is frequently searched for may stay in cache, so fewer clock cycles are spent finding the same set of data.
 
Originally posted by AmbitiousLemon
it's faster on paper, but you drop it into the real world and usb2 seriously lags behind fw.

Huh? I do not follow. FireWire is better because of how it writes and reads, but claiming that's anything like the G4 vs P4 vs AMD is ludicrous.
 
AmbitiousLemon:

"sorry im not impressed. basically all you said is that despite an education (from where i wonder) that you still will ignore all tests of real world performance i favor of a benchmark that in no way translates to the speed of the processor."

Whew, talk about blinders!

I didn't say I would ignore other benchmarks at all; I said that SPEC means more to me than Apple's Photoshop demo and RC5. Woo, look, 3 whole tests! Amusing that you tried to interpret this as all tests in the world.

Your claim that SPEC does not reflect in any way the speed of the processor is absolutely ridiculous. What do you think they are benchmarking anyway? How fast monkeys can be trained to say the processor's name? Perhaps they just make up those numbers to make Apple look bad?

"furthermore you ignore everything regarding the differences in the archetecture of the chip all to stare blindly at a single benchmark that says something different from every single other benchmark performed on these machines"

Oh great. Tell you what, my applications don't run on buzzwords or marketing speak. Nor do they run on an irrational desire to have Apple defeat the vast Forces of Evil or whatever x86 is in your view. I bring up SPEC because it is the industry standard. IBM, Sun, Intel, AMD, SGI, Compaq, HP... you'll find them all there. Note that IBM even runs their Power4 through SPEC and tells everyone about it. Guess what? When a G4 runs the tests (a wide variety of algorithms, int and float) it is shown to be on par with a Pentium III. That's it. This isn't Photoshop, this isn't RC5, this is the industry standard. This is the real world, not Apple's cute backyard playset where you find Apple-friendly apps. It's better than these unnamed benchmarks that you seem to believe show the G4 to be close to as fast as top-of-the-line P4s.
 
Originally posted by ddtlm

Oh great. Tell you what, my applications don't run on buzzwords or marketing speak. Nor do they run on an irrational desire to have Apple defeat the vast Forces of Evil or whatever x86 is in your view. I bring up SPEC because it is the industry standard. IBM, Sun, Intel, AMD, SGI, Compaq, HP... you'll find them all there. Note that IBM even runs their Power4 through SPEC and tells everyone about it. Guess what? When a G4 runs the tests (a wide variety of algorithms, int and float) it is shown to be on par with a Pentium III. That's it. This isn't Photoshop, this isn't RC5, this is the industry standard. This is the real world, not Apple's cute backyard playset where you find Apple-friendly apps. It's better than these unnamed benchmarks that you seem to believe show the G4 to be close to as fast as top-of-the-line P4s.

AMEN!!!! :D :D :D
 
Originally posted by kenohki


What benchmarks are you talking about here? Steve Jobs' Photoshop bakeoff. Marketing that is. Run a different set of filters that are SSE2 enabled and the P4 will come out ahead.

Intel gave Adobe assembly code for SSE2, so the filters ARE optimized for the P4. Do some research next time.

On the other hand, only two or three filters are optimized for AltiVec (lighting effects is one), so it's a pretty even match.

One other point is that these Photoshop tests are not just running some filters, as PC users seem to love to say. There is a lot of transforming and rendering involved. The reason Apple uses Photoshop is that both the Windows and Mac OS versions are based on the same code.

I use Photoshop for a living on both G4s and P4/Win2k boxes, and I can attest it's faster on Macs.

On the other hand, AfterEffects is a dog on OS X, and runs much faster on Windows, so this is a case where Adobe did not optimize their code for OS X.


I'd love to see an independent standard benchmark like SPEC (or TPC-C since I'm a DBA). Granted, SPEC isn't always the best indicator of system performance but it's pretty much the best independent benchmark out there for processor performance. And the G4 doesn't end up in a dead heat against a 2GHz P4 in SPECcpu.

The problem with the SPEC benchmarks is usually the compiler. The Mac version is poor quality. The problem is that no benchmark is equal between two different platforms.
This is why Intel's benchmarks are always better than another benchmark on the same CPU: they have highly optimized compilers. SPEC also only tests the CPU, not the whole system. In July 2000 NASA did some tests on the G4s and felt they had better performance than Intel and Alpha CPUs.

This paper describes work conducted at NASA Langley Research Center during an evaluation of PowerMac G4 systems for FORTRAN-based scientific computing and computational fluid dynamics simulation. A PowerMac G4/500 was configured for dual booting into Mac OS and Linux operating systems. Various developer tools were used to compile and run test codes on the G4 for comparison benchmarking with platforms including Cray C-90, Compaq Alpha, Pentium III, and Silicon Graphics (SGI) systems. Following general benchmarking, more specific AltiVec testing was conducted on the G4 using FORTRAN and C, and approaches for implementing AltiVec in generic FORTRAN computations were developed.

Results indicate that the PowerMac G4 system has the potential to be an inexpensive high performance scientific computing platform. Much of that potential is currently unrealized, however, due to the limited amount of AltiVec support in FORTRAN. Without the parallel vector processing capabilities of AltiVec, the G4 places near the end of the pack in performance tests using standard FORTRAN scientific codes. In limited cases where AltiVec acceleration was available and tested under FORTRAN, the G4 showed a clear advantage with 4-7X greater performance and a 5-8X greater cost effectiveness than all other workstation systems evaluated. Examples presented in this report show that only minor re-coding would be necessary to implement AltiVec instructions if they were accessible to standard FORTRAN programming. Because of this, there appear to be many opportunities to advance scientific computing on the PowerMac G4 platform.


Benchmarks are only good if you spend your day running benchmarks, but most people do other things with their computers. ;)
 
OMFG Cyber**** IS RIGHT!! these are 7470s!!!! OMFG!!!!! HAHAHAHAHAHAHAHAHAHAHA

WHY?

well, there are no 867mhz 7455s, only 867mhz 7451s, and i doubt apple is using the 7451s again hehehehhehehehehehehehehehehehehheheheh

from motorola.com

err i hope?

or maybe that's why the 867mhz system is so cheap... cuz they're using 7451s.....
 
eirik:

The common principle behind caches is that data recently accessed is most likely (it's a big game of chance) to be accessed again in the near future. Although the database may be HUGE, critical parts such as indexes are used a lot and tend to be in the cache when they are needed, even if all the main data is out in main RAM somewhere. Additionally, with large databases there are at least two accesses: one (or many) to look up an index, and one to get the data. Having half of that cached can help a lot.

The algorithms that decide what stays in cache are actually amazingly simple (in theory), usually just an efficient version of "least recently used gets kicked out".

Of course it's not possible to search the entire cache for what is actually the oldest (most stale) entry, so they have ways to guess, and it does a pretty good job. The situation is actually more complex for set-associative caches (the most common type by far), but it at least starts out simple.
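
If it helps to see that in code, here's a tiny C sketch of the "least recently used gets kicked out" rule for a made-up 2-way set-associative cache. The set count, tags, and access pattern are invented purely for illustration; no real G4 or P4 cache looks like this.

/* Minimal sketch of LRU eviction in a 2-way set-associative cache.
   All sizes here are illustrative, not from any real CPU. */
#include <stdio.h>
#include <stdint.h>

#define NUM_SETS 4            /* tiny cache for illustration */
#define WAYS     2            /* 2-way set associative */

typedef struct {
    uint32_t tag;
    int      valid;
    int      lru;             /* higher = used longer ago */
} Line;

static Line cache[NUM_SETS][WAYS];

static int lookup(uint32_t addr) {
    uint32_t set = addr % NUM_SETS;
    uint32_t tag = addr / NUM_SETS;
    int victim = 0;

    for (int w = 0; w < WAYS; w++) {
        if (cache[set][w].valid && cache[set][w].tag == tag) {
            /* hit: mark this line as most recently used */
            cache[set][w].lru = 0;
            cache[set][1 - w].lru = 1;
            return 1;
        }
        if (cache[set][w].lru >= cache[set][victim].lru)
            victim = w;       /* remember the stalest line in the set */
    }
    /* miss: the least recently used line in this set gets kicked out */
    cache[set][victim].valid = 1;
    cache[set][victim].tag = tag;
    cache[set][victim].lru = 0;
    cache[set][1 - victim].lru = 1;
    return 0;
}

int main(void) {
    /* repeated accesses to a small "hot" set of addresses hit;
       a one-off access misses */
    uint32_t pattern[] = {3, 7, 3, 7, 100, 3, 7};
    for (size_t i = 0; i < sizeof pattern / sizeof pattern[0]; i++)
        printf("addr %3u -> %s\n", (unsigned)pattern[i],
               lookup(pattern[i]) ? "hit" : "miss");
    return 0;
}

Compile that and you'll see the two "hot" addresses keep hitting while the one-off access misses, which is the whole point of a cache.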

[Edit: Fixed spelling, wording.]
 
Originally posted by DavidRavenMoon


Intel gave Adobe assembly code for SSE2, so the filters ARE optimized for the P4. Do some research next time.

On the other hand, only two or three filters are optimized for AltiVec (lighting effects is one), so it's a pretty even match.

I know they're optimized. What I'm saying is that Apple excludes filters/operations that would heavily benefit from the SSE2 optimizations. And when the PC zealots run it, they stay away from ones with AltiVec optimizations.


The problem with the SPEC benchmarks is usually the compiler. The Mac version is poor quality. The problem is that no benchmark is equal between two different platforms.
This is why Intel's benchmarks are always better than another benchmark on the same CPU: they have highly optimized compilers.

True, and I understand this. But SPEC is the industry standard, and PowerPC ISA processors (the POWER4) can scream on it. Everyone seems to participate except Apple, who sits back and poo-poos the whole thing.
 
DavidRavenMoon:

I won't call SPEC perfect, but I'd say it's not bad for a multi-platform test.

It is true that compilers are a big deal; however, in the tests I am basing my opinion on (by c't, I think) both the Pentium III and the G4 were using similar gcc compilers. The P3 scores 30%+ higher with Intel's compiler. So you see that I was already being nice to the G4. :)
 
What do they mean when they say this:

—256MB DIMMs (64-bit-wide, 128-Mbit)
—512MB DIMMs (64-bit-wide, 256-Mbit)



What is the Mbit rating? How do I make sure I buy the right kind of memory?
 
cyberfunk:

Yes, Apple uses standard memory. I'd recommend Crucial.com for it; that's what I did for my dual 800. Remember that in the past, some people with cheap RAM had it rendered unusable by an Apple firmware update.

[Edit: I see Crucial does not have the new PMs listed in their memory selector, so if you want to play it safe with the compatibility guarantee, give them a week or two. The Xserve is listed.]
 
Originally posted by ddtlm
cyberfunk:

Yes, Apple uses standard memory. I'd recommend Crucial.com for it; that's what I did for my dual 800. Remember that in the past, some people with cheap RAM had it rendered unusable by an Apple firmware update.

Right, that's what I'm worried about, but PC2700 is already pretty expensive for 512 MB ($111).

I want to know what they mean by 64 bits wide and 256 Mbit/128 Mbit.
 
Re: Re: Re: Re: Not too expensive, you are just too cheap

Originally posted by davei


Check out the MJ-12 DDR from Alienware:

http://www.alienware.com/main/system_pages/mj12 ddr.asp

Nearly the same price as the "Fastest" Powermac ($50 difference), but there's more memory (albeit technically slower), much better video, 5.1 sound, no Superdrive, no Firewire, slightly more HD space, no Gigabit ethernet.

You can draw your own conclusions, but I don't think the top end Powermac is all that bad for $3300, except on processor speed.

Well, the Alienware does include a free mousepad..... that's gotta be worth something, right?
 
Originally posted by eirik


I'm confused. If a large data set or database is measured in tens, hundreds, or thousands of megabytes, then I should think that L3 cache would only help for frequently requested fields/records, provided the software knows to identify and cache frequently queried data. For example, if a database were a gigabyte in size, would there only be a benefit from 2MB of L3 cache if a majority of the queries were for only about 0.2% of the data? Is that common? I don't get it; please help.

You are confusing this with main system memory. Only the CPU accesses the L3. The information requested is instructions being used by the CPU, not data used by an application. A database app will access main system memory and data on the hard drive. If you are using a program like Photoshop, and if you have enough RAM, it will load the entire image into RAM to speed things up. But even while Photoshop is accessing memory, your CPU is doing its own thing, running both Photoshop and the rest of the system too.

So when Photoshop requests an operation, say opening a file on your drive, the CPU is doing the work for Photoshop, and also for any other programs or tasks running. If it thinks it might be called on to repeat a task (through branch prediction, etc.), it stores it in cache memory.
I'm oversimplifying it, but that's the general idea.
 
For instance, this is the info I get on Crucial 2700 ram:

DDR PC2700 • CL=2.5 • Unbuffered • Non-parity • 6ns • 2.5V • 64Meg x 64



64 Meg x 64 means what? I don't see how the numbers 256Mbit and 64 Meg are related. To get from bits to bytes one divides by eight, but 256/8 = 32, not 64... so I'm confused...
 
cyberfunk:

Well, I can tell you what the "megabit" is all about, but I can't tell you why Apple cares. The megabit rating tells you how big each chip on the RAM DIMM is, and since, for example, the 256MB DIMM uses 128Mb chips (little b = bit), we know there have to be 16 chips of RAM on it. (Remember one byte = 8 bits, so 128Mb * 16 = 2048Mb = 256MB.)

Why does Apple care? Beats me.
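
For what it's worth, here's that 16-chip arithmetic spelled out in a few lines of C; nothing here is Apple-specific, it's just the bit/byte conversion from the paragraph above.

/* How many 128-Mbit chips it takes to build a 256MB DIMM.
   Just the arithmetic from the post above, nothing vendor-specific. */
#include <stdio.h>

int main(void) {
    long long dimm_bytes = 256LL * 1024 * 1024;       /* 256 MB DIMM    */
    long long chip_bits  = 128LL * 1024 * 1024;       /* 128-Mbit chips */
    long long chips = (dimm_bytes * 8) / chip_bits;   /* bytes -> bits  */
    printf("chips per DIMM: %lld\n", chips);          /* prints 16      */
    return 0;
}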
 
less VRAM?

Did Apple drop the amount of RAM on the GeForce4 MX from 64MB to 32MB when they moved it to the entry-level machine? :confused:
 
Here is my $0.02...

I use both Macs and PCs and honestly find the "speed difference" negligible.

But here is the interesting thing. PCs are up to 2.5 GHz while Macs are now around 1 GHz. So when these SPEC benchmarks periodically come out, a PC should be 150% (2.5x) faster than a Mac on the various tasks, but this is NEVER the case. At most the Mac loses by 20%, or it even comes out faster, depending on whether the test uses AltiVec.

I read those test results where TechTV or whoever says, "Macs are crushed by PCs," but when you look at the results, it is a mere 20% margin at most. When your clock speed is 2.5x, I think you should be ashamed to be only 20% faster. It is obvious that Intel has found a way to scale up their MHz without really increasing real-world speed.
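
To put rough numbers on that (these are just the figures from this post, 2.5x the clock for roughly a 20% overall win, not measured data):

/* Back-of-the-envelope per-clock comparison using the numbers above. */
#include <stdio.h>

int main(void) {
    double clock_ratio = 2.5 / 1.0;   /* P4 clock vs G4 clock            */
    double perf_ratio  = 1.20;        /* PC ~20% faster overall          */
    /* work done per clock cycle, relative to the G4 */
    double per_clock = perf_ratio / clock_ratio;
    printf("P4 work per cycle relative to G4: %.2f\n", per_clock); /* ~0.48 */
    return 0;
}

In other words, by these numbers the P4 is getting less than half as much done per clock cycle as the G4.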

But all this aside, will one be able to type a paper or report 3 times faster on a 3 GHz PC (when it becomes available) than on a 1 GHz PC? Oh, I bet you would be able to type up an e-mail and send it faster....give me a break.

Most people don't need or even use all the speed anyway.
 
I got one

I bought a PowerMac dual 1 gigz on August 2nd. Luckily they hadn't shipped it yet, so they let me switch my order to the NEW dual 1 gigz.

So I got the new one with the Nvidia Ti, and 120 gig drive (the old one was going to have an 80 gig drive w/ the Nvidia Ti) for $600 less than I was going to pay for the old version. The dual processors are now way too good to pass up.

Now, where to get 512MB of DDR RAM at an excellent price? Hmm. If you guys have any advice, let me know.
 
cyberfunk:

In your "64 Meg x 64", I can tell you the second "64" stands for bits of width. Note that the ECC RAM is 72, i.e. it has an extra bit for parity on each byte (so it would have 9 or 18 chips to a DIMM rather than 8 or 16). I have to assume that the "64 Meg" refers to megabits, but honestly I am not sure and don't spend a lot of time looking at this stuff.
 