Originally posted by MisterMe
Unlike you, I actually know string theorists and they like Macs. Real scientists like to get their work done, not futz around with computers. As for the term velocity, there is really only one relevant dimension in computation, the line between the beginning and end of the job. In that respect, velocity would denote the rate at which you are moving forward or backward. Since your computer task can't move backward even on a Windows machine, speed is the more appropriate term.
ve·loc·i·ty (və-lŏs′ĭ-tē)
n. pl. ve·loc·i·ties
1. Rapidity or speed of motion; swiftness.
2. Physics. A vector quantity whose magnitude is a body's speed and whose direction is the body's direction of motion.
3. The rate of speed of action or occurrence.
4. The rate at which money changes hands in an economy.
Originally posted by Bengt77
Wrong! The G4 still has some powers not unleashed yet. Just throw in a true DDR FSB (say 400MHz) and use corresponding (duh) DDR RAM and you'll see the (dual) G4's performance go up to quite a competitive level with today's Pentium 4s. At least, that's what I think. The G4 has never been utilised to its full potential, which is a terrible shame, really.
Originally posted by Masker
And, if you're going to get all physics on us, use the correct term: velocity not speed. [Edit: Hmmmm. For cps or Hz, velocity isn't exactly right, either...]
Originally posted by P-Worm
Do you think that IBM will be using the 970 for things other than Macintoshes? Maybe their own desktops? Seems a little unlikely, but we don't want to be swept with the current...
P-Worm
Originally posted by hayesk
If Intel can say the P4 makes your colours more vibrant and your internet surfing more rewarding, then why not fight fire with fire?
Originally posted by Cubeboy
Another thing to note is that a program that fits into a 32-bit model and doesn't take advantage of a specific 64-bit capability (scalability, memory, high-precision arithmetic) will probably perform better compiled as a 32-bit binary than as a 64-bit one, due to the extra cache misses that come with the larger 64-bit binary.
Originally posted by Vlade
Will the 970 have DDR400 or what?
Originally posted by mkaake
i don't think the g4 is all that bad - i think we're all getting caught up in the mHz myth that we love to bash others about. i mean, i'm writing this on a 266 g3, and it runs just about everything i want it to (except x, of course...) (and yeah, i know there are ways to make it, but it's not worth it on this machine). pump it with ram, and the only times you see the age of the proc are when you rip and encode in itunes, and the occasional bout when you're running 4 or 5 programs at a time.
i think the g4 bashing has become kind of a sport and we've forgotten that it's still pretty powerful...
oh well. think what you will.
<braces for impact>
matt
Originally posted by jettredmont
Um, no.
Originally posted by matznentosh
Actually, there have to be at least 4 spatial dimensions plus the time dimension. Space is curved as can be seen clearly by the bending of light rays coming from distant stars. That curve has to occur in a different spatial dimension than the 3 we experience with our senses. As for string theory, most of my physics friends use unix boxes. They haven't gotten the Mac bug yet.
Originally posted by Frobozz
PS8 Won't Work on OS 9
Could this mean something more than a simple Carbon to Cocoa conversion? I think it might. What other reason would they have for dropping OS 9?
Perhaps it's because they will be incorporating 970-enhanced code? Is there any reason why supporting the 970 would exclude OS 9? If not, it may just be an excuse to stop QA'ing OS 9 support. Who knows.
It's no more modern than an AMD K6-3 (which was of course a K6-2 with the on-die L2 and motherboard L3). In addition to the similar caches, they have similarly slow FSBs, weak scalar floating-point units, poor clock-speed scaling, and no hardware prefetch. I'm not interested in looking up K6 details, but I'd be surprised if it had significantly worse out-of-order or superscalar execution capabilities than a G4. Perhaps someone else feels motivated to continue the comparison.
The truth is that the G4 is a modern processor. It happens to have some problems (too-slow FSB, etc.), but it is a robust, powerful, modern processor.
There is nothing that suggests the PPC-970 will outperform the Intel chips of its time in most apps. Already Intel has P4s with an equally fast FSB and similar caches that score better in SPEC (the only performance numbers really available for a PPC970). Intel's next revision, needless to say, will not be slower. The current P4 core has been around since January 2002 and the next one will be out this year. Intel is a tough competitor.
And what we're excited about is not the prospect that Apple will be competitive again, but that we may be coming into a new 'Golden Age', akin to when the G3 came out; that 970-based Power Macs will not simply be competitive with P4s, but will significantly exceed their performance.
This looks to be pure speculation. Wishful speculation at that. Yes, a 400MHz FSB and solid RAM would boost certain AltiVec benchmarks a lot, but since when has that been the problem with the G4? If bandwidth were the problem, don't you think that Apple's new 166MHz FSB computers would be able to demonstrate a nice performance boost? Don't know about you, but I was troubled by the 17" AluBook's unimpressive CPU performance compared to the old PC133 15" TiBook.
Originally posted by jettredmont
(and, I'd suspect that the compiler will have a separate switch for allowing >4GB addressable memory which would twiddle the pointer size back down to 32 bits as 90% of all apps won't need that much addressable memory).
Originally posted by AidenShaw
Using 64-bit pointers, however, does increase the runtime footprint of a program. For some programs, the pointers represent a fair proportion of the total data. Programs that keep their data in lists, doubly-linked lists, or trees often have that issue.
The larger data size lowers performance in two ways:
- more memory bus bandwidth is needed to move the pointers
- the pointers occupy more space in cache, thereby reducing the effective size of the caches
Originally posted by Rincewind42
But if you are using so many pointers, then your data has already violated .... If you are using that many pointers, even at 32 bits you are losing a lot of performance.
Originally posted by ddtlm
Snowy_River:
As for your assertion that the G4 is powerful, well, that's just ridiculous. There are plenty of benchmarks that illustrate what happens to G4s that venture outside of a hand-sewn AltiVec cocoon.
Originally posted by Rincewind42
Or it could simply be that they are using Carbon calls that are exclusive to Mac OS X. Or they have switched to Mach-O and are using other Mac OS X frameworks. It is completely possible (and I bet very likely) that Adobe will continue to use Carbon for its existing applications. Remember, they were one of the early companies that complained about having to rewrite for Cocoa - why would they do so now _after_ getting a Carbon version on Mac OS X (as opposed to before, when they had tons of lead time and inside access that numerous other devs didn't)?
And of course, it is also possible that they will be putting out a 64-bit version of Photoshop - but they will still put out a 32-bit version for the majority of people who won't have 970s.
Originally posted by AidenShaw
Losing performance compared to what?
Your statements seem rather naive - some algorithms naturally run fastest by using pointers. Think of a B-tree with small nodes: sure, the pointers will be a fair percentage of the data, but you're quite unlikely to come up with a faster way of solving the problem.
Many other important applications (fluid dynamics, semiconductor design, ...) also have a fair number of pointers.
Originally posted by dongmin
well i don't know if cocoa will improve the performance but the recent versions of Photoshop and Illustrator for OS X really really blow. They need to do SOMETHING to make them run better.
Originally posted by Rustus Maximus
Rewriting it exclusively for Cocoa should help the speed, since all of the legacy code will finally be gone. Of course, for me Photoshop 7 doesn't really blow now... Illustrator is a slug... but Photoshop is fairly snappy on my Dualie 1 Gig.