Originally posted by barkmonster
I read something about the SPEC benchmarks before.

I can't remember where, or the exact wording of what I read, but the gist was that SPEC benchmarks are extremely suspect.

Yeah, and I'll bet you anything that what you read came from a poster who was a Mac zealot. :)
A few reasons spring to mind.

Compiler Optimisation - obviously this is already mentioned.

SPEC is a test designed to measure system speed across platforms. It is written in a standard language - C. It is actually not a single test, but many smaller, separate tests whose results are combined into one score. The tests are designed to accurately reflect the real-world tasks a CPU is required to accomplish.

System vendors compile the SPEC test suite with whichever compiler they choose. They are not allowed to modify the test suite. They then run the test suite, and the test suite outputs the result. That's all there is to it. The result is a function of CPU speed and compiler efficiency. It provides the most accurate reflection yet devised of application speed across platforms.

SPEC is the most widely accepted benchmark there is, adopted by EVERY major computer company EXCEPT Apple - HP, Sun, IBM, Compaq, Dell, Intel, SGI - you name it. (Why Apple refuses to publish SPEC scores and instead chooses to publish ludicrous Photoshop "benchmarks" is anyone's guess, but the real reason - that the G4 performs downright embarrassingly in SPEC CPU2000 - seems pretty obvious to me.)
Perfect Instructions - This is the big one; this is the one that means the branch predictor handling that 19-stage pipeline on a Pentium 4 NEVER misses an instruction! Hardly real-world.

But it IS real-world, because it is a testament to the EXCELLENT compiler Intel has written for the P4. Intel was able to write a compiler that takes an ordinary task and lays it out for the execution unit just perfectly.

By contrast, the PPC compiler used in the SPEC test suites is complete sh*te. You say this isn't fair, but it IS fair, because real-world performance is a function not only of theoretical CPU performance but also of the code generation of the compiler. The only way this wouldn't be the case is if all developers wrote their software in assembly only. So Apple can go on all it wants about its gigaflops and its "Pentium 4-crushing" performance and so on, but the fact remains that all of that is little more than marketing drivel describing peak theoretical performance that comes nowhere close to being achieved in reality.

No, the SPEC test (on the benchmarks I've seen) does not account for AltiVec. This is because the SPEC test suite is written in platform-independent code that is not optimized for ANY individual processor. Intel is lucky that it has a great compiler that can optimize for the P4's SIMD units automatically. Once Apple incorporates a compiler capable of auto-vectorizing into OS X (presumably GCC 3.x), the FP side of the results will improve quite a bit. But that hasn't happened yet. (They will need a 4-fold FP performance improvement to even come close to the 2.5GHz P4.) And keep in mind AltiVec is not double-precision, or at least it won't be until the G5+ gets here.
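To make "auto-vectorizing" concrete, here's a minimal sketch - hypothetical code, not anything from the SPEC suite. The point is that the source stays plain, portable C; it's the compiler that decides whether to emit scalar or SIMD instructions for it:

/* A scalar loop with independent iterations. An ordinary compiler
   emits one floating-point multiply per iteration; an auto-vectorizing
   compiler (like Intel's, targeting the P4's SSE2 unit) can emit SIMD
   instructions that process four floats at a time - from this exact
   same source. */
void scale(float *y, const float *x, float a, int n)
{
    int i;
    for (i = 0; i < n; i++)
        y[i] = a * x[i];
}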
Small Instructions - Helpfully for those CPUs with tiny amounts of cache RAM, the instructions used in the SPEC benchmark are designed to fit in those small amounts of cache, eliminating the bottleneck of RAM while giving another misleadingly high score to inefficiently designed CPUs.

This is simply not true. The whole point of SPEC is to accurately measure real-world performance across platforms. If you've got a problem with the way the G4 performs and you think the Pentium 4's results are inaccurate, then please explain why the 1GHz G4 gets beaten badly not just by the Pentium 4, but also by such chips as:

- The 400MHz MIPS R12000
- The 500MHz MIPS R14000
- The 750MHz PA-8700
- The 833MHz Alpha 21264B

All of which have huge secondary and tertiary caches.
This means in effect that SPEC benchmarks are about as real-world as throwing 2 computers off a bridge, and the one that sinks first is the winner.

*cough*

In reality the Pentium 4 is held back by its lack of L3 cache, its lack of a barrel shifter (which shifts a value by any number of bits in one instruction - even the 386 had one!), and its huge pipeline, which means that even with a highly efficient branch predictor, the Pentium 4 spends more time waiting for instructions than executing them. I've generalised in this last paragraph, by the way; some of the info is from emulators.com

You almost sound as if you actually know what you're talking about here.

Alex
 
I still think everyone misses the point. I have a DP 450, and god dang it, it may have a gig of RAM - PC100 RAM - but that's it, really. And the fact of the matter is, I am much more PRODUCTIVE than my friend with an Athlon XP 2200+. Yes, he can do SETI units in 4 hours, whoopee. I can run Photoshop, iTunes, Word, Excel, Mathematica, QT, Final Cut Pro, After Effects, and Illustrator at once and use them all whenever I want by simply holding down the Apple key and hitting Tab.

Yeah, 4 hours of video in FCP might take 8 hours to render, but that's why I have a dog and a mountain bike (and sleep/night time, for that matter). Now granted, I do not run a film shop, so time is not really that important, but with the new machines it shouldn't be a problem.

I read this book once that interviewed a guy who used to design the processors for Cray (I hope you all know what that is; if you don't, well, LOOK IT UP :D - no, for perspective: what would take any given PC system/cluster of home PCs a year to complete, the Cray can do in .0003 of a second). Every day, he said, he was asked to shave off another millisecond, then a nanosecond, then a picosecond, and soon enough he said exactly what I'm saying: that everyone is missing the point, that the gov't or whoever always wanted faster, faster, faster machines but forgot what they could already do with current technology. The fact is, Joe Schmoe wants a fancy typewriter, that's it, and a freaking 800 MHz G4 can handle that, folks. It can handle MUCH MORE than that. As for what he thinks of when he buys it, like price vs. "performance", leave that up to the Apple Store employees to work on.
 
One last little rant here - A) I'm sooo sick of this discussion I could easily cough up my liver at this moment, and B) go here and look at the FIRST TEST. It's BLAST, folks, straight-up calculation, and it did it in minutes compared to HOURS. I'm sure the scientists are *really* looking for MHz/GHz when they see that :rolleyes: - http://www.apple.com/xserve/performance.html
 
Originally posted by iPat
...Don't know if it's even possible, but it would be interesting to get a sense (right now) of whether DDR and ATA-100 are going to make that much of a difference??...

Don't know if this helps or not, but we have a WebObjects app running on an Xserve that was running on a desktop-style Mac server before. We're using the same RAID system, so there is no difference there. We cache key portions of the database in RAM to improve performance, so our app gives a pretty good indication of the performance change due to DDR. We are seeing response times on the Xserve generally in the area of 12%-15% of those on the desktop-style system.
 
Eugenia Loli-Queru is actually not a guy.

I was kidding. Didn't even notice "Eugenia." Hmmm, I dig smart women... have to check her views out a little more.

What do you mean by this?

Good Lordy, what was I thinking? I meant to say that AltiVec is important for the processing of some items that lend themselves to vector processing. I also meant to say that, despite rumors that IBM wasn't enthralled by AltiVec and preferred higher clock rates, AltiVec does not impede Motorola's attempts to clock the G4 higher as much as some would believe.



Why should developers spend precious time re-tooling large amounts of their code so that it will run optimally on a platform used by 3.5% of the desktop market when they can write the same code in a manner that is both portable and platform-independent and have it run wicked-fast on what 95%+ of the mainstream desktop computing market uses?

There are a few discrepancies here. AltiVec really doesn't require large amounts of retooling. As for market share, Apple commands a larger percentage in certain genres of apps, like graphics. These developers would be well suited to adding AltiVec. Portable code is nice, but within that 95% (assumed to be Windows) of computers we are talking about machines all the way back to Windows 3.1 or DOS. Then again, perhaps these old machines cannot be considered "mainstream." I'm not begging the developers to do it, but the market is competitive, and anything you can do to give yourself an advantage is warranted.

I totally agree on the compilers and Apple/Moto's support (or lack thereof) of the developer community. I believe Moto has this one final chance to get the processor right. I'm remaining optimistic.
 
Yup, BLAST is known to be one of the best examples of the G4's ideal performance. But unless Apple wants to become a vendor of BLAST, RC5, and Photoshop-filter appliance boxes, it should really start looking into improving the overall performance of its systems.
Originally posted by shadowfax0
I still think everyone misses the point. I have a DP 450, and god dang it, it may have a gig of RAM - PC100 RAM - but that's it, really. And the fact of the matter is, I am much more PRODUCTIVE than my friend with an Athlon XP 2200+. Yes, he can do SETI units in 4 hours, whoopee. I can run Photoshop, iTunes, Word, Excel, Mathematica, QT, Final Cut Pro, After Effects, and Illustrator at once and use them all whenever I want by simply holding down the Apple key and hitting Tab.

That's good, but productivity is a different topic entirely. I was under the impression that we were talking about benchmarks and real-world performance here.
Yeah, 4 hours of video in FCP might take 8 hours to render, but that's why I have a dog and a mountain bike (and sleep/night time, for that matter).

Sounds like you're shifting the argument away from performance, into "what would I need that great performance for anyway" territory.
Now granted, I do not run a film shop, so time is not really that important, but with the new machines it shouldn't be a problem.

I hope so.
I read this book once that interviewed a guy who used to design the processors for Cray (I hope you all know what that is; if you don't, well, LOOK IT UP :D - no, for perspective: what would take any given PC system/cluster of home PCs a year to complete, the Cray can do in .0003 of a second). Every day, he said, he was asked to shave off another millisecond, then a nanosecond, then a picosecond, and soon enough he said exactly what I'm saying: that everyone is missing the point, that the gov't or whoever always wanted faster, faster, faster machines but forgot what they could already do with current technology. The fact is, Joe Schmoe wants a fancy typewriter, that's it, and a freaking 800 MHz G4 can handle that, folks. It can handle MUCH MORE than that.

It could handle much, much, much more than that if Apple would actually spend a little effort optimizing its OS and apps, but Jaguar will supposedly make lots of improvements in this area. I think the G4 is quite a good CPU; it's just that the crap compilers and the slow-ass OS weigh it down severely. Apple should look to BeOS or AmigaOS for examples of OS efficiency done right. If they did that, they could squeeze better performance out of their chips, and maybe actually do well enough in SPEC to want to start publishing results again.


Alex
 
No no, you see, we *ARE* talking about performance here. If I am more productive, then the company or whoever PERFORMS better. What I'm trying to say is that it's not important how fast your CPU is; it matters whether you can use it, and so far with Windows, you can't really use the speed. And if you really want to get serious, take the UltraSPARC III, running Solaris. A LOT is done on those processors, and they're not especially fast - a G4 vs. ONE UltraSPARC, the G4 would win. You see, Sun is just *really* good at cramming a lot in there and making them work together under Solaris. If you want performance, go buy a Sun workstation, and then go out and buy yourself an Athlon XP 2200+; see which one is faster at some benchmark, then see how much actually gets done on either one. That's a real performance test, in my opinion. And go watch the Weather Channel - they use two Crays from NOAA so they can tell you how the weather is.
 
Originally posted by nuckinfutz
There are a few discrepancies here. Altivec really doesn't require large amounts of retooling.

I suppose it depends on the code that needs to be re-tooled, but in any event it does require forking the code - maintaining two separate codebases, one for straight-up FP and one for vectorized FP. And although the adjustment is probably not difficult in the strict sense of the word, with a one-million-line program it can take several people months.
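To make the forking concrete, here's a minimal sketch of what it looks like in C, assuming a GCC-style __ALTIVEC__ predefined macro, 16-byte-aligned arrays, and n a multiple of 4 (the function name is made up):

#include <stddef.h>
#ifdef __ALTIVEC__
#include <altivec.h>
#endif

/* Hypothetical function: y[i] = a*x[i] + b. One entry point, but two
   codebases underneath - both paths must be written, tested, and kept
   in sync. Real code would also handle the leftover elements when n
   is not a multiple of 4. */
void scale_shift(float *y, const float *x, float a, float b, size_t n)
{
#ifdef __ALTIVEC__
    float sa[4] __attribute__((aligned(16)));
    float sb[4] __attribute__((aligned(16)));
    vector float av, bv;
    vector float *vy = (vector float *)y;
    const vector float *vx = (const vector float *)x;
    size_t i;
    sa[0] = sa[1] = sa[2] = sa[3] = a;   /* splat the constants */
    sb[0] = sb[1] = sb[2] = sb[3] = b;
    av = vec_ld(0, sa);
    bv = vec_ld(0, sb);
    for (i = 0; i < n / 4; i++)
        vy[i] = vec_madd(av, vx[i], bv); /* four multiply-adds at once */
#else
    size_t i;
    for (i = 0; i < n; i++)
        y[i] = a * x[i] + b;             /* plain scalar FP path */
#endif
}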
As for market share, Apple commands a larger percentage in certain genres of apps, like graphics. These developers would be well suited to adding AltiVec.

I agree that AltiVec can be attractive for developers who publish software used in Apple's niches (Digidesign, Genentech, Adobe, Apple itself, etc.), but I don't think it's realistic to expect all developers to optimize all their software for AltiVec, as if they have some sort of responsibility to the Mac community.
I'm not begging the developers to do it, but the market is competitive, and anything you can do to give yourself an advantage is warranted.

I think, though, that outside Apple's niches, developers who try to give themselves an advantage on the Mac by adopting AltiVec do so at their own peril. The reason being, if they spend more effort developing two separate codebases optimized for two incompatible platforms while their Windows-only competitors are able to deliver competing products to the same market faster and cheaper (because they don't have to pay anyone to port to an obscure platform), the Mac-loyalist developers might just find themselves the losers in that scenario, even though they own the Mac market and the other company doesn't. This whole AltiVec scenario would work a lot better for Apple if Apple had some solid mainstream leverage, like a 20% market share.

I hope GCC 3.1 improves this whole situation. I've heard reports of code performance gains well into the double-digits on PPC over the 2.95.2 compiler, and supposedly it is auto-vectorizing, although maybe I'm wrong about that. I think Apple is headed in the right direction here, although they're definitely not there yet. :)

Alex
 
Originally posted by shadowfax0
No no, you see, we *ARE* talking about performance here. If I am more productive, then the company or whoever PERFORMS better. What I'm trying to say is that it's not important how fast your CPU is; it matters whether you can use it, and so far with Windows, you can't really use the speed.

I agree with what you said about productivity, but I think a LOT of people would disagree that "you can't really use the speed" with Windows. Many people are attracted to the Mac every day, just as many Mac users jump ship because their machines just aren't keeping up, and they find that they can be more productive with a fast Windows PC. I don't like Windows either, but I find the argument that you can't be more productive with it hard to swallow, generally.
And if you really want to get serious, take the UltraSPARC III, running Solaris. A LOT is done on those processors, and they're not especially fast - a G4 vs. ONE UltraSPARC, the G4 would win. You see, Sun is just *really* good at cramming a lot in there and making them work together under Solaris.

What do you mean by "cramming a lot in there"? Suns are nice machines because they have a good balanced system design, something that the Mac needs badly. And a 1.05GHz USIII would beat a 1GHz G4 at least twice over at non-AltiVec tasks.
If you want performance, go buy a Sun workstation, and then go out and buy yourself an Athlon XP 2200+; see which one is faster at some benchmark, then see how much actually gets done on either one. That's a real performance test, in my opinion.

I don't see what you're getting at. If I'm a Photoshop user, I'm literally infinitely more productive on the Athlon, because there is no Photoshop for Solaris. Specialized tasks are one thing - sure, every platform has its niche, its area where it really shines, Apple included. But generally, Apple is quite far behind the rest of the world at the moment and will continue to be behind unless the G5+ arrives in a Mac soon, whether or not productivity is factored in.

Alex
 
Re: U Failed to see my point

Originally posted by nuckinfutz

I wasn't specifically talking about you or your statements. Many people seem to have this misconception that Apple is closed... sure, they don't have clones, but as far as hardware and software go, they're as open today as they can be for the size of their market. As far as what consumers look for, I have enough experience with that, having sold both Macs and PCs. Consumers aren't fanatics like some of us... they have some misconceptions; you just have to find out what is best for them. Sometimes megahertz is it... some would be best off with a stronger focus on ease of use. Idiot? Not me... I'm a little arrogant, but no idiot.

No harm done then.
 
vector/matrix math

Alex_ant makes a great point about AltiVec. Let me paraphrase: because the compilers for the Apple/G4 do not auto-vectorize code, one has to write one's code, in whatever language, using one or more of the 162 vector functions that were created.

Here's a good link with examples; it's a brief read and well worth a few minutes of effort:

O'Reilly Article on AltiVec by a NASA Langley Engineer

For your convenience, here is one of the examples from the two-page article:


y = ax + b

where x and y are vectors and a and b are constants. In this operation, x is scaled by a factor of a and then shifted by b. With scalar computations, this can be implemented as:


for (i = 0; i < n; i++) { y[i] = alpha*x[i] + beta; }

where x, y, alpha and beta are defined as floats. Using AltiVec’s vector multiply-add instruction, the same operation can be written as:

for (i = 0; i < n/4; i++) {
    y[i] = vec_madd(alphaV, x[i], betaV);
}


Here, x and y are arrays of vector floats, and alphaV and betaV are the constants splatted across vectors:

alphaV = (alpha, alpha, alpha, alpha)
betaV = (beta, beta, beta, beta)
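(That pseudo-notation isn't itself valid C; in actual AltiVec code you have to "splat" the scalar across a vector yourself. One way to do it - a sketch, assuming a 16-byte-aligned scratch array:)

/* Build alphaV = (alpha, alpha, alpha, alpha): */
float tmp[4] __attribute__((aligned(16)));
vector float alphaV;
tmp[0] = tmp[1] = tmp[2] = tmp[3] = alpha;
alphaV = vec_ld(0, tmp);   /* vec_ld requires the 16-byte alignment */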


Now, for me, I'm certainly no stranger to vector/matrix arithmetic. Unfortunately, one of the greatest crimes of education in America today is the failure to teach vector/matrix operations well enough that most programmers/engineers are comfortable using them; instead, most tend to resort to scalar operations.

However, that said, look at the example above: the nomenclature that was created for the vector/matrix operations is awkward and counter-intuitive, even for the vector/matrix-inclined folk. In the example, the scalar elements of the simple equation

[Y] = a [X] + b,

had to be written as vectors, note "a" is now:

[a] = (a, a, a, a).


And still worse, one has to wrap the expression within the confines of a confusing function, "vec_madd", where the elements of the expression are listed as function arguments rather than written out as an arithmetic expression. Now, I don't know what the actual nomenclature of "vec_madd" is - that is, what exactly the parameters of this "vec_madd" function are. Nonetheless, it reminds me of those nasty and confusing functions in Microsoft Excel where one is forced to think differently [BTW, M$'s definition of 'think differently' translates to 'think counter-intuitively']. As maddening as this can be in Excel, it is a spreadsheet after all and is thus forgivable. But in a programming language, I find this astonishing!!!

I find it a little ironic that the function is called "vec_madd" - note the "m" in there, instead of just "vec_add". Klingon bastards!
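[It turns out the "m" is for multiply-add: the AltiVec C interface declares the function as an element-wise multiply-add, which is exactly the shape of y = ax + b:]

/* vec_madd(a, b, c) returns a*b + c, element by element:
   result[i] = a[i]*b[i] + c[i] for each of the four floats. */
vector float vec_madd(vector float a, vector float b, vector float c);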

So, not only does one have to be comfortable with vector/matrix operations to lever AltiVec, but one has to employ an awkward nomenclature as well.

Now, if one could define vectors and matrices the way one defines variables and arrays, and then employ an intuitive nomenclature, levering AltiVec would be significantly easier. Bear in mind, the above example was for an incredibly simple expression. I don't imagine people writing code for 3D apps employ algorithms that are nearly so simple.

Alex_ant mentioned that newer compilers may include auto-vectorizing. I'm not sure what auto-vectorizing really means, however. I'm assuming that the compiler would look for suitable loops and nested operations and compile them for the CPU as vector operations.

With the above example, I'm sure some of you can better appreciate the value that would provide. Although I'm a little concerned as to how reliably such a compiler would interpret (no pun intended) a complex algorithm with all kinds of nested operations. I'm not saying it wouldn't be reliable; I'm just expressing some concern.

I suspect that a more intuitive nomenclature for vector/matrix operations in the programming language would be superior to auto-vectorizing, because many a bug has been born whilst writing scalar operations such as:


for (i = 0; i < n; i++) {
    for (j = 0; j < n; j++) {
        for (k = 0; k < n; k++) {
            c[i][j] = c[i][j] + a[i][k]*b[k][j];
        }
    }
}


Also, if the languages employed a more intuitive nomenclature for vector/matrix operations, and if programmers were comfortable with vector/matrix operations, not only would AltiVec be more utilized, but software code could be simpler and more succinct. [There must be a rationale for the languages not employing a simpler, superior nomenclature for matrix operations!?!]

So, if one could just code matrix operations such as:

matrix[Y] = a * matrix[X] + b;

then levering AltiVec would not only be easier, it could even be easier than present-day scalar techniques.
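Short of new language syntax, one can get partway there today by burying the awkward nomenclature in a helper function. A rough sketch in plain C (the name "saxpy" is just one I picked; it assumes 16-byte-aligned arrays and n a multiple of 4):

#include <altivec.h>

/* y = a*x + b, with the intrinsics hidden from the caller: */
static void saxpy(float *y, float a, const float *x, float b, int n)
{
    float sa[4] __attribute__((aligned(16)));
    float sb[4] __attribute__((aligned(16)));
    vector float av, bv;
    vector float *vy = (vector float *)y;
    const vector float *vx = (const vector float *)x;
    int i;
    sa[0] = sa[1] = sa[2] = sa[3] = a;   /* splat the constants */
    sb[0] = sb[1] = sb[2] = sb[3] = b;
    av = vec_ld(0, sa);
    bv = vec_ld(0, sb);
    for (i = 0; i < n / 4; i++)
        vy[i] = vec_madd(av, vx[i], bv);
}

/* The call site then reads almost like the math:
   saxpy(y, a, x, b, n);  */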

BTW, I haven't written any code in over ten years, so please be kind if I've written something out of whack.
 
I think we can all agree we love the Mac OS. Why else would we be here? And we hate the way M$ does business.

But a lot of us loyalists, who once touted the superiority of the hardware, software, and the company in general, seem to be wavering. Apple isn't the only one running a business in an "economic slowdown". Most of us are at least budgeting, and need the most bang for our bucks.

Sorry to tell you, but Wintels are the competition. If Apple wants to win consumers and keep us non-zealot supporters, they need to wake up. If Motorola is lagging, find other ways to make things look better: faster FSB, memory, IDE, overall system improvements. The cases are neat and all, but what about what's inside? The software is fantastic; give us the hardware to prove it. We all dislike Windows.

And M$. So why conduct business like they do? I support them buying up other companies to innovate (Apple, not M$) and giving us new software solutions, but at what cost? Should we just accept it because it's the lesser of two evils? Should we just say, "well, that's the fastest Mac there is, so I'll buy it"? Or "it looks so stylish, who cares if it's still using last year's technology"? Why become complacent with the obsolescence?

Should we have to overpay for things that others offer for free, or at least cheaper? People are mad because they feel extorted, and dammit, they have every right to be pissed. They paid extra because they thought they were getting more. Now they're getting the old "bait and switch". The extras of .Mac are worth $100/year, if they work. But what about the "free for life" e-mail? Or paying $1,000 for a new OS X.2 Server license two months after buying a $4,000+ Xserve? Or even FULL PRICE for an upgrade? They want us to buy hardware before an Expo, then punish us if we do.

See if anybody buys a new Mac after they announce 10.3, but before they deliver it. Next it will be XP-type registration practices. Isn't this why we're trying to move away from M$? Correct me if I'm wrong, but you don't get new customers by p*ssing people off, and you lose those that used to support you. How many of us used to rave to our friends, family, and co-workers about Apples? How many of us now think twice before recommending them? They want to switch people? Give them a better reason to want to switch.

They want to move into the Pro realm? Give us Pro hardware. Who would care that it's a 1.2 GHz CPU if it were surrounded by ATA/133 (which really is better than ATA/66), 120 GB+ hard drives, PC2700 DDR, a 166 MHz FSB (x2), 2 CD drives, USB 2, built-in Bluetooth, etc.? It matters now. You can do it. How many of us use GB Ethernet, or used USB or FW when they first came out? You can't use modern-day specs!?!

You want to charge us more? Give us more. All you zealots can flame away: "Apple good, Wintel bad, must hide head in sand." You should see what the other side is saying; I'm a zealot to them. But I'm just trying to be a realist here. I want a new Mac, but nothing on the current roster suits my needs. I'm not paying $2,000 for style. I can't afford to. Not many can (and if you can, lucky you). Mr. Jobs, give me something fairly decent and I'll take a (slight) performance hit for the extra stability and ease of use.

It's my $$$; I'll take it where I think it will serve me best. And telling people to just go buy a PC doesn't help. Actually, it just proves my point, because that's exactly what people do. This isn't a private club; it's a business claiming to want new customers, and in doing so it doesn't seem to be catering to its current base. If anything, it's p*ssing off its most important clientele by making a lot of really bad choices. We're not happy, and we, the customers, are what matter.

Voice your opinions, people. And b*tch all you want, until someone listens. Because no, it's not "good enough".
 
Originally posted by solvs
I think we can all agree we love the Mac OS. Why else would we be here? And we hate the way M$ does business.

[snip]

Voice your opinions, people. And b*tch all you want, until someone listens. Because no, it's not "good enough".

I totally agree with you; I'm tired of listening to those "it will be good enough" people. Damn, where is the WOW effect of new Apple hardware, the way it used to be? Maybe it was lost when good hardware was dropped for cheaper parts (SCSI for ATA, still USB 1, slower RAM, CPUs slower than the competition's when it used to be the other way around). And no, no, no price drop! No, you still pay about the same fu... price, but they downgrade the components in the Mac! I don't call this evolution; it's downgrading to make more money. I know they need it, but I don't like throwing my money into a fire. Give us decent hardware to go with that magical software (yes, I'm on a Mac only for the software and the OS; the hardware su.ks a lot. I really have to love the OS, and yes, I find OS X amazing). If you don't want to make top-edge hardware, OK, but revise your damn prices then! And don't come back at me about the price of better components, please - most Macs today are built with fu...ing PC hardware! You can flash a PC video card to get it working on your Mac!
 
to Electric Image

I think this is a great example of AltiVec non-acceptance. Electric Image doesn't support it, and it is painfully slow on a Mac, yet my college keeps using Electric Image Universe for its animation classes.
The big question is: why isn't it AltiVec-optimized? :mad: :(
If this optimization is as easy as some people say, why hasn't Electric Image done it yet?

I have spent countless hours rendering in Electric Image. Thinking that it would have been much easier to meet deadlines if only Electric Image hadn't been so lazily useless is just infuriating.

Can Electric Image please give an explanation on the matter?
:confused:
 
Originally posted by shadowfax0
One last little rant here - A) I'm sooo sick of this discussion I could easily cough up my liver at this moment, and B) go here and look at the FIRST TEST. It's BLAST, folks, straight-up calculation, and it did it in minutes compared to HOURS. I'm sure the scientists are *really* looking for MHz/GHz when they see that :rolleyes: - http://www.apple.com/xserve/performance.html

Let's see, Apple compared their top-of-the-line, newest dual G4 to:

dual P3s and an UltraSPARC II.

Please correct me if I am wrong here, but those are at least one-year-old systems next to a very recently incarnated G4. The P3s have no DDR; the G4 does. The bus speed of the G4 is 133 MHz; the P3's is 100 (I think).

I'd love to see the comparison to a 1U dual Athlon from Penguin Computing! Now there is a real comparison.

This article is just Apple marketing painting a pretty picture. I don't blame them, but I do prefer to empower the consumer with better information than that of the marketing dept. of the company trying to get your money.
 
Originally posted by shadowfax0
One last little rant here - A) I'm sooo sick of this discussion I could easily cough up my liver at this moment...

Only Windows users can cough up their livers, since they have to drink heavily to tolerate their OS...
 
Re: to Electric Image

Originally posted by elensil
If this optimization is as easy as some people say, why hasn't Electric Image done it yet?

If EI is similar to other 3D packages, most of the math is done in integer. I know that Infini-D (a Specular package from way back), which I used heavily, didn't use AltiVec for that reason. Their rendering loop was hand-tuned assembly, from what I understand, and it didn't pay to AltiVec it, according to the Specular team (when it still existed, sigh...).
 