What’s funny is that when we are designing processors, we don’t use *any* of these benchmarks when making our design decisions. I always find it amusing how people obsess about these things.

I would imagine you especially don’t run them in a vm. ;)

Out of curiosity, how did you judge the capabilities of a processor during design? Do you run SPEC? Or not even that - purely in house stuff?
 
It varied. In general we were operating on the level of "it needs to increase IPC by x% and clock rate by y% vs. our last design." That was the starting point for most of the designs I worked on. We would also focus on specific capabilities - floating point needs to be faster by a certain percentage, or the integer multiplier needs to operate in 4 cycles instead of 5.

Oftentimes we'd look at certain benchmarks - booting Windows, recalculating an Excel spreadsheet, performing a SPICE simulation - and analytically determine what we could do to improve the speed of those things. But it was along the lines of "this code has a lot of memory stores following conditional branches, what can we do to improve that," not trying to achieve any particular benchmark score. Depending on where I did the designs, we had a large array of small benchmarks that are common in the field but not used by consumers.

For my Ph.D. research I must have run 10,000 benchmark simulations to figure out how to optimally size some caches. Again, those were certain academic benchmarks that were just intended to create certain patterns of memory accesses, and which academics use to compare their results to each other. Nobody thinks they mean much in the real world.
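That "increase IPC by x% and clock rate by y%" framing composes multiplicatively, since throughput scales with IPC times clock rate. A minimal sketch of the arithmetic (the IPC, clock, and percentage numbers below are made-up illustrations, not figures from any design discussed here):

```python
# Toy performance model: instruction throughput = IPC * clock rate.
# All numbers are illustrative placeholders, not real design targets.
def throughput(ipc: float, clock_ghz: float) -> float:
    """Instructions per second, given IPC and clock in GHz."""
    return ipc * clock_ghz * 1e9

base = throughput(ipc=2.0, clock_ghz=3.0)

# Hypothetical target: +10% IPC and +5% clock vs. the last design.
target = throughput(ipc=2.0 * 1.10, clock_ghz=3.0 * 1.05)

print(f"combined speedup: {target / base:.3f}x")  # 1.10 * 1.05 = 1.155x
```

The two gains multiply rather than add, which is why even modest per-generation targets compound quickly across several designs.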
 
What’s funny is that when we are designing processors, we don’t use *any* of these benchmarks when making our design decisions. I always find it amusing how people obsess about these things.
Do you mind if I ask who "we" are? I am very curious. If you want, you can PM the answer if you do not want it known in public.
 
That's just reality of M1. You give up compatibility with M1 and without native apps performance is even slower at nearly half of AMD. That's why my MBA M1 is used pretty much as an overpriced Chromebook.

Performance is NOT halved for apps running through Rosetta. Your one benchmark was more of a cherry-pick than any of the garbage Intel threw at the wall to prove their "superiority" (which is actually their inferiority complex bubbling to the surface). Your comparisons are so poorly set up and run that Stevie Wonder could see the flaws in them...
 
What’s funny is that when we are designing processors, we don’t use *any* of these benchmarks when making our design decisions. I always find it amusing how people obsess about these things.
Hit the nail on the head there brother.
 
Here’s a paper I co-wrote about one of my early chips that has a nice nexus to the Apple community :)
500nm process and 1600MIPS.... Not bad in 1997!

A little over 20 years later, the M1 is using a 5nm process and I've seen an estimate of 35,000 MIPS.
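The rough scaling above is easy to check with a quick calculation (keeping in mind that the 35,000 MIPS figure is described as an estimate, not a measured value):

```python
# Rough throughput comparison using the MIPS figures quoted in the thread.
# The M1 number is an estimate mentioned in the post, not an official spec.
old_mips = 1_600    # 500 nm process, 1997
new_mips = 35_000   # 5 nm process, M1 (estimated)

speedup = new_mips / old_mips
print(f"~{speedup:.1f}x more instructions per second")  # ~21.9x
```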
 
That's just reality of M1. You give up compatibility with M1 and without native apps performance is even slower at nearly half of AMD. That's why my MBA M1 is used pretty much as an overpriced Chromebook.
Then I would suggest that you run native apps....come on in, the water's lovely...
 
500nm process and 1600MIPS.... Not bad in 1997!

A little over 20 years later, the M1 is using a 5nm process and I've seen an estimate of 35,000 MIPS.
One fun thing about this chip, btw, is that the process size mattered a lot less than in CMOS chips (only our RAM structures were CMOS). Our transistors were gargantuan - we used BJTs for logic, not FETs. So the thickness of the layers, not the lateral dimensions of the transistors, was what mattered for transistor toggle speed. A smaller process size would mainly have helped with density - we could have gotten more transistors on the die and shrunk them, but each would still be many times the size of the minimum feature. Nowhere near the issue where you have to worry about quantum effects. And no "leakage" anyway, because when you are using ECL or CML circuits your current is mostly static. Not good for power consumption, though :)
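The power-consumption point follows from the standard first-order power models: ECL/CML gates draw their bias current continuously, so power is roughly supply voltage times current regardless of switching activity, while CMOS dynamic power scales with activity, capacitance, voltage squared, and frequency. A sketch with illustrative numbers (none of these values are from the chip in the paper):

```python
# First-order power models for ECL/CML vs. CMOS logic.
# All numbers below are made-up illustrations, not figures for any real chip.

def ecl_static_power(v_supply: float, i_bias: float, n_gates: int) -> float:
    """ECL/CML: bias current flows constantly, so P ~ V * I per gate."""
    return v_supply * i_bias * n_gates

def cmos_dynamic_power(alpha: float, c_load: float, v_dd: float,
                       f_hz: float, n_gates: int) -> float:
    """CMOS: P ~ alpha * C * V^2 * f, paid only when gates toggle."""
    return alpha * c_load * v_dd**2 * f_hz * n_gates

# Hypothetical 10,000-gate block: ECL at 5 V with 1 mA per gate,
# vs. CMOS at 3.3 V, 500 MHz, 20 fF load, 10% activity factor.
print(f"ECL:  {ecl_static_power(5.0, 1e-3, 10_000):.0f} W")       # 50 W
print(f"CMOS: {cmos_dynamic_power(0.1, 20e-15, 3.3, 500e6, 10_000):.2f} W")
```

The static term dominates for ECL even at idle, which is why the speed came at a steep power cost.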
 