
-hh

Normally, we hear of Uncle Steve talking about Photoshop filters and how Macs are still doing respectably versus PCs.

Here's a different set of benchmarks that have been charted. Note how everything that's outperforming the Pentium chip is doing it at no more than half the clockspeed :D


[Benchmark chart: axisymmetric_lsdyna.jpg]

Question is: does anyone have any info on how the Mac might perform here?

The specific application is LsDyna, and the file size addressing required is in excess of 8GB, which pretty much cuts out all 32-bit systems.
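Just to make the 32-bit cutoff concrete, here's a generic sketch (the scratch path is hypothetical, nothing from our actual setup): with a plain 32-bit off_t you can't even seek 8GB into a file, so large-file (64-bit offset) support is the bare minimum.

```c
/* Sketch only: why >8GB files rule out plain 32-bit builds.
 * A 32-bit off_t tops out around 2GB, so you can't even seek to the
 * 8GB mark; _FILE_OFFSET_BITS=64 asks for a 64-bit off_t on 32-bit
 * Unix systems.  The scratch path below is purely hypothetical. */
#define _FILE_OFFSET_BITS 64
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>

int main(void)
{
    printf("off_t is %lu bits\n", (unsigned long)(sizeof(off_t) * 8));

    int fd = open("/tmp/scratch.bin", O_RDWR | O_CREAT, 0644);
    if (fd < 0)
        return 1;

    /* try to seek 8GB into the file -- fails if off_t is only 32 bits */
    off_t eight_gb = (off_t)8 * 1024 * 1024 * 1024;
    if (lseek(fd, eight_gb, SEEK_SET) == (off_t)-1)
        perror("lseek to 8GB failed");

    close(fd);
    return 0;
}
```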

With the impending 970 and 64-bit addressing, Apple could be a player, if they ran LsDyna. They would also need to announce "real soon", since the individual who did the research to put this together is looking at dropping $25K for hardware very soon - call it before the end of May.


-hh
 
Originally posted by maradong
Hm, which kind of test is this benchmark actually running?

I can't release a lot of the information (job related). The benchmark isn't a benchmark in the conventional sense, but rather the actual application that we need to run with a stripped-down data set that will run in less than a day (for reference, it's ~3000 elements and slightly more nodes, running at 2 degrees of freedom for 0.2 sec of model time in LsDyna).

Apparently, FEA applications are very model-to-model dependent, so there is no obvious "best" computer for all FEA.

Since the business need is to run this specific model as fast as we can, the approach is a very simple one: take the model, go test it on iron we can beg/borrow/steal to see how fast it runs. From there, go buy the fastest iron for this benchmark we can afford.

What's Mac-relevant is that LsDyna should be able to run on a Mac due to its Unix underpinnings. I'm wondering if anyone has done this, and if they have any relative benchmark-like insight. If it looks promising, I can probably arrange to set up a test, and if it does better than the HP, then we'll fight the fight to buy a Mac to do the job.

This does, however, assume that OS X can (or will very soon) be able to address filesizes greater than 8GB.


-hh
 
Basically, it seems that he's digging for some ammo to use against Pentiums by comparing some weird, obscure benchmark that 99% of computer users don't use to prove that G4s and the upcoming PPC970 are faster.:D
 
Originally posted by -hh
What's Mac-relevant is that LsDyna should be able to run on a Mac due to its Unix underpinnings. I'm wondering if anyone has done this, and if they have any relative benchmark-like insight. If it looks promising, I can probably arrange to set up a test, and if it does better than the HP, then we'll fight the fight to buy a Mac to do the job.

I'm a Unix programmer, and I sometimes get to buy hardware like this (just spent $70000 on a Sun machine).

Forget about Apple for this - they just don't compete in this marketplace. The test graph you give doesn't really tell us anything. For modelling, it will be the memory and disk IO subsystems that really get hammered - and this is why the more specialist manufacturers do well. CPU speed is important, but it's just part of the equation.

If you tried a 2-CPU Xserve, I would bet that you'd get 2 days or more (if the benchmark ran at all). The G4's SPECfp is worse MHz for MHz than the Pentium's (about 300 at 1GHz vs. about 370 at 1GHz for a P4, if a P4 ran that slowly) - i.e. the MHz myth is in Intel's favour. The only way the G4 competes at present is on AltiVec-based code, and you probably wouldn't have the facilities to optimise this app to benefit. Unlike the Xserve, the Intel machine will probably be running on SCSI RAID too - much faster than IDE RAID.
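For what it's worth, here's roughly what AltiVec-friendly code looks like - a generic sketch, not anything from LsDyna - and it also shows the catch: AltiVec works on 4-wide single-precision floats and has no double-precision vector type, which a lot of FEA work relies on.

```c
/* Generic AltiVec sketch (GCC with -maltivec on a G4/G5).
 * Four single-precision multiply-adds per instruction -- great if your
 * code is 32-bit float and hand-vectorised, no help for the
 * double-precision math many solvers lean on. */
#include <altivec.h>
#include <stdio.h>

int main(void)
{
    vector float a = { 1.0f, 2.0f, 3.0f, 4.0f };
    vector float b = { 0.5f, 0.5f, 0.5f, 0.5f };
    vector float c = { 1.0f, 1.0f, 1.0f, 1.0f };

    /* fused multiply-add: r = a*b + c, four lanes at once */
    vector float r = vec_madd(a, b, c);

    union { vector float v; float f[4]; } out;
    out.v = r;
    printf("%f %f %f %f\n", out.f[0], out.f[1], out.f[2], out.f[3]);
    return 0;
}
```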

Buy the HP or SGI. If your problem gets bigger, you'll probably be able to expand both beyond 8GB.

This does, however, assume that OS X can (or will very soon) be able to address filesizes greater than 8GB.

Well that's a mighty big assumption. Current rumors are an announcement at the tail end of June, so that's not within your timetable.

I kind of suspect that you don't have a big part in this decision process. I think the guys who use this for their work would have a good laugh at you if you suggested your idea to them...
 
Originally posted by -hh
Question is: does anyone have any info on how the Mac might perform here?

I can't release a lot of the information (job related). The benchmark isn't a benchmark in the conventional sense, but rather the actual application that we need to run with a stripped-down data set that will run in less than a day (for reference, it's ~3000 elements and slightly more nodes, running at 2 degrees of freedom for 0.2 sec of model time in LsDyna).

Again, without more information it would be hard to find you any specific data.

If you have the money to access an SGI or a Cray, then stick with the big guns.
If you're trying to come up with some sort of networked distributed computing project, then you might try to pursue finding out how a Mac would fare in this kind of benchmark testing.

Your post is interesting, but hard for us to relate to. Every computer has its purpose, even if it is to satiate the needs of the everyday sheep.
 
Originally posted by firestarter
I'm a Unix programmer, and I sometimes get to buy hardware like this (just spent $70000 on a Sun machine).

Forget about Apple for this - they just don't compete in this marketplace.

About what I suspected. Still, it would have been nice to learn if anyone was running LsDyna on a Mac and how well it ran (even if it isn't the best solution)...since it has a Unix implementation, it would appear to be possible.


I kind of suspect that you don't have a big part in this decision process. I think the guys who use this for their work would have a good laugh at you if you suggested your idea to them...

Asking about the Mac is part of a long dispute with management on IS policy. I wouldn't even suggest considering a Mac unless I had a benchmark in my pocket to back up the suggestion as a potentially good idea. I don't, so that's not to be; "oh well..."

The long dispute here goes back several years, to when we were mandated to standardize on Windows for the desktop, which banned all Apples. This mandate was opposed under the position that while standardization is fine (if it actually saves money - they've never proved it), such a decision has to be balanced against using the right tool for the job. This very reasonable position was overruled, and those of us who watch our IS overhead support costs have noticed that they've gone up every year since, but management doesn't want to admit that they might have made a mistake.


The good news here is that the Windows PC lost the benchmark test, and since the differences in cost are easily justified by the runtimes (translates to labor), management's Windows standardization mandate is about to be overturned: the old obstructionist is going to be eating crow.

That's a win as it stands. But if the winner could have been a Mac, that would have been gravy ... pure poetic justice.


FWIW, we've also pursued time on a Cray cluster -- problem is that while the job runs fast, the runtime doesn't drastically improve, due to file transmission times and queue scheduling delays.


-hh
 
Originally posted by -hh
FWIW, we've also pursued time on a Cray cluster -- problem is that while the job runs fast, the runtime doesn't drastically improve, due to file transmission times and queue scheduling delays.


-hh

"Supercomputing turns algorithms into IO problems"
Anonymous

My Mom worked for USGS and did groundwater modeling on the Air Force weapons lab Cray. Apparently it had two mainframes handling IO for it :eek:
 
Originally posted by -hh
About what I suspected. Still, it would have been nice to learn if anyone was running LsDyna on a Mac and how well it ran (even if it isn't the best solution)...since it has a Unix implementation, it would appear to be possible.
I'm sure it would be possible, but if it was a lot of work to get running, then the programmer/sysadmin expense to do this wouldn't be justified. For this sort of thing the G4 isn't the ideal processor either.


The long dispute here goes back several years, to when we were mandated to standardize on Windows for the desktop, which banned all Apples.
That sucks. Sorry to hear it.


This mandate was opposed under the position that while standardization is fine (if it actually saves money - they've never proved it), such a decision has to be balanced against using the right tool for the job. This very reasonable position was overruled, and those of us who watch our IS overhead support costs have noticed that they've gone up every year since, but management doesn't want to admit that they might have made a mistake.
Without knowing the organisation size it's difficult to comment. No doubt Windows does take some maintenance, but an organised IS department should be able to standardise maintenance and machine builds to cut that down.


The good news here is that the Windows PC lost the benchmark test, and since the differences in cost are easily justified by the runtimes (translates to labor), management's Windows standardization mandate is about to be overturned: the old obstructionist is going to be eating crow.
OK - but it's going to cost you $$$ to get some Unix sysadmin skills in to run this thing if you're going non-windows.

Interesting that the Pentium system was running Windows. Unlike most on this forum, I actually quite like Intel's hardware, especially for Linux use, but I have a hard time with Microsoft. Maybe it's because my background is as a hardware electronics engineer... and Intel does do some good stuff.


That's a win as it stands. But if the winner could have been a Mac would have been gravy ... pure poetic justice.
True. Apple seems to be all about niche computing at the moment. The Xserve machines are nice - but it's not clear where Apple are going to take them. I think that they only really have their sights on file and web serving, not the heavy duty processing you have in mind.


FWIW, we've also pursued time on a Cray cluster - - problem is that while the job runs fast, the runtime doesn't drastically improve, due to file transmission times and que schedule delays.
Which goes to prove that you have to have an open mind with engineering problems like this. The 'obvious' solution isn't always the best; you have to pick a solution that suits your problem. Frankly, I'd go with whatever is most cost-effective - even if it's Intel.

If your management sucks, and you'd prefer to be working in a Mac shop - there are other options out there!!
 
Originally posted by firestarter
Without knowing the organisation size it's difficult to comment. No doubt Windows does take some maintenance, but an organised IS department should be able to standardise maintenance and machine builds to cut that down.

The problem was in the hypocrisy of the policy -- eliminate the Mac OS but keep a dozen flavors of Windows. At the time we were running 3.1, 95, NT 3, NT 4, 98, 98SE, & CE. Today we're still running 98SE, NT4, 2000, XP, & XP Pro that I know of, and there's no push to try to cut costs by eliminating half of these.


OK - but it's going to cost you $$$ to get some Unix sysadmin skills in to run this thing if you're going non-windows.

The basic admin isn't rocket science. The only thing that makes it hard is Microsoft garbage that's unfriendly to open network standards. For example, we have absolutely no reason not to stick with the simple and proven, like hardwiring static IP addresses to individual PCs. Instead, we have a complicated system that allows us to move nodes between buildings...but the problem is that, because of how we do our "Property Book" accounting of the hardware, it's owned by the individual department and thus not allowed to move between buildings without a ton of paperwork. As such, it's a network feature that we never use.


The Xserve machines are nice - but it's not clear where Apple are going to take them. I think that they only really have their sights on file and web serving, not the heavy duty processing you have in mind.

FWIW, I did check back with the guy leading this project - he did look into the Xserves...his budget will support buying a cluster of ~20 of them.

Problem is that the LsDyna FEA software has the old "too many flavors of Unix" problem, and doesn't run on Apple. Anyone have an Apple rep who might be willing to look into it, in consideration of a possible 20-unit Xserve purchase?


If your management sucks, and you'd prefer to be working in a Mac shop - there are other options out there!!

This is only one factor, and a relatively small one in the bigger picture. Overall, my biggest complaint is with the IS strategies that I see: the one that concerns me the most is how we're driving towards "all eggs in one basket" single-point-of-vulnerability (or attack) failure opportunities. How quickly they forget that when the Melissa virus hit in 2000, my Mac was the only machine out of ~600 in our building that could print anything for a week.

When it comes to IS management, homogeneity does offer cost-savings potential -- but the trade-off is that when things go wrong, the entire system goes down. Heterogeneity in OSes and apps is a good thing for operational robustness.


-hh
 
Um, exactly what chip is that?

Judging from the companies, speeds, and configurations, I believe those are server CPUs being benched against an older Pentium 4. It appears that there is at least one Itanium 2 system, and perhaps an HP RISC and a MIPS system also. Windows has limits on addressing, but PAE allows 36-bit physical addressing (up to 64GB of RAM), so 32-bit CPUs can do decently on a test like this as long as the job fits into the 32-bit computing environment. That probably explains why the P4 system did so well on this particular test. Not sure, though, how well a G4-equipped Power Mac would perform on this kind of test.
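A rough sketch of that distinction (generic, nothing vendor-specific): PAE raises the machine's physical RAM ceiling, but each 32-bit process still gets only a 32-bit virtual address space, so a single solver process can't map an 8GB model in one piece.

```c
/* Sketch: PAE widens physical addressing to 36 bits (up to 64GB of RAM
 * in the box), but a 32-bit process still sees at most a 4GB virtual
 * address space.  This prints 32 on any 32-bit build, regardless of
 * how much physical RAM PAE exposes to the kernel. */
#include <stdio.h>

int main(void)
{
    printf("pointer size: %lu bits\n", (unsigned long)(sizeof(void *) * 8));
    return 0;
}
```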
 
Interesting that the Pentium system was running Windows. Unlike most on this forum, I actually quite like Intel's hardware, especially for Linux use, but I have a hard time with Microsoft. Maybe it's because my background is as a hardware electronics engineer... and Intel does do some good stuff.

I personally think that Intel has been a great innovator, but at the same time, some of their efforts are... guided towards selling, and what sells? GHz. Thus, they have probably made sacrifices in some of the advancements they could have made. I think the neatest thing they are working on right now is their Hyperthreading technology. A great concept, but sadly, it is sort of like a cooperative multitasking OS in that it doesn't put enough limits on the resources a thread takes up. It will be interesting to see where they go with it. The trace cache is also a neat innovation. Now, if we could only get their focus off of just improving the FSB and onto improving the entire system bus, that would be nice ;). Anyhoo. I don't like Intel merely because of their marketing and their focus on profit vs. making a better CPU. lol. Then again, I can't say I blame them. I would be focusing on the same things in their shoes. Gotta keep the stockholders happy.
 