Hyperthreading is just there because x86 instructions are of different lengths. By feeding 8 threads, it is much more likely that all 4 cores will be used close to their maximum. If you send 8 identical-length instructions to a hyperthreaded x86, it will take twice as long as 4 identical-length instructions.

Not sure what you are trying to say. Hyperthreading requires the CPU to process more instructions, not fewer. If the variable-length nature of the ISA were a problem, you'd think the issue would be instruction decoding. But hyperthreading puts more burden on decoding, since you have to decode two streams of instructions instead of a single one. SMT's purpose is to improve the utilization of the backend, not to reduce the burden on the front-end. And there are plenty of SMT implementations on fixed-width ISAs (again, check out the new POWER CPUs, which use radical SMT).

A 4-core, 8-thread CPU will not be as efficient as an 8-core, 8-thread CPU when running software that utilizes 8 cores. Hyperthreading involves context switching and sharing of cache. Hyperthreading ultimately allows running two things on the same core, but those two things are still on that one core, whereas a full 8-core, 8-thread (non-hyperthreaded) CPU will be better due to having 8 full cores available.

It's not appropriate to call a 4-core, 8-thread CPU an 8-core CPU.

Cores are independent entities which are effectively mini-chips. Hardware threads reuse the logic units within the same core during their idle periods. This works for similar workloads, but when the pipeline needs to be flushed, everything stalls until the pipeline gets filled again.

This is all true, and yet it's exactly what I am talking about. The Apple M1 might have 8 physical cores, but from a performance standpoint it behaves quite similarly to a quad-core CPU with SMT. Focusing on core count and whatnot in the context of CPU performance, without discussing what the cores actually do, is prone to creating false expectations.



I had absolutely nothing to do with that chip. Aside from the fact that it was essentially designed by the Austin team (if I remember correctly - it's been a long time) (Sunnyvale did even-numbered chips, and Austin did odd), I achieved a lot of notoriety around here by predicting it would suck and explaining why, only to be proven right when the chip actually went on sale.

My time with AMD lasted through the hammers, and stopped when we got to vehicular construction equipment.

Sorry for spreading misinformation! I just saw your name mentioned as a production manager on this chip on a tech forum, but I should have known better than believing what people say on the internet :)
 
My time with AMD lasted through the hammers, and stopped when we got to vehicular construction equipment :)
By "vehicular construction equipment" are you referring to Geode? That purchase didn't make sense to me, much like the acquisition of Alchemy for PDAs.
 
Ah, I took that literally (embedded systems). I see my error now, thank you for the clarification.

I have to admit that I laughed about this exchange a bit. Imagine AMD-powered tractors... now with even more cores and advanced 3D cache!
 
I have to admit that I laughed about this exchange a bit. Imagine AMD-powered tractors... now with even more cores and advanced 3D cache!
I did too, after I realized my goof. As the saying goes, if you can't laugh at yourself, then somebody else is. I knew that @cmaier had left roughly around the time of the Geode acquisition from NatSemi. It should have been obvious to me that he was talking about the dumpster fire that was Bulldozer, but I had embedded chips stuck in my head. Hence, accidental hilarity ensued, and I'm oddly proud of that.
 
Well, the M1 is an entry-level chip, but today we are at the point where entry-level chip performance is more than enough for 90% of people.

We all buy overpowered PCs for our needs.
I have a 10-year-old PC with a glorious Intel 3570K and a GTX 770, just because I don't want to pay Adobe rent every year. It still runs CS6 like it did on day one. Rock-solid machine.

The M1 is an incredible entry-level chip.
This means I really won’t be able to afford the new MBPs. I mean, considering the price range of my 512GB iPP, I don’t want to know what a MBP rocking an insane chip is going to go for. That, or the new iPPs are just ridiculously overpriced. Which, ya know… Apple… premiums.
 
I did too, after I realized my goof. As the saying goes, if you can't laugh at yourself, then somebody else is. I knew that @cmaier had left roughly around the time of the Geode acquisition from NatSemi. It should have been obvious to me that he was talking about the dumpster fire that was Bulldozer, but I had embedded chips stuck in my head. Hence, accidental hilarity ensued, and I'm oddly proud of that.

That would be a good naming scheme. AMD Dumpster Fire. AMD Hindenburg. AMD Chernobyl. AMD Plague.

We went with elemental gases, dinosaurs from Land Before Time, and hammers.
 
That would be a good naming scheme. AMD Dumpster Fire. AMD Hindenburg. AMD Chernobyl. AMD Plague.

We went with elemental gases, dinosaurs from Land Before Time, and hammers.
Remember when Cyrix had to rename their "Jedi" chip because Lucasfilm threatened to sue them? Keep in mind that this was an internal codename, not an actual product release. They quickly changed it to "Gobi" just to avoid what would have been one of the dumbest trademark lawsuits in history.
 
One of the early PowerMacs was given the code name "Sagan". Carl Sagan sent a cease and desist letter. Apple then changed it to BHA for Butt Head Astronomer.
 
Remember when Cyrix had to rename their "Jedi" chip because Lucasfilm threatened to sue them? Keep in mind that this was an internal codename, not an actual product release. They quickly changed it to "Gobi" just to avoid what would have been one of the dumbest trademark lawsuits in history.

I do remember that. Which is funny, because every Silicon Valley conference room is named Endor or Alderaan / Enterprise or Holodeck, etc. (AMD was a Star Trek naming scheme. Intel named its conference rooms after the 7 rings of hell. OK, that last part isn't really true. I think they just numbered them, but I haven’t been in one for 25 years or so.)
 
One of the early PowerMacs was given the code name "Sagan". Carl Sagan sent a cease and desist letter. Apple then changed it to BHA for Butt Head Astronomer.
The "Sosumi" sound on early Macs was also a response from Apple to an ongoing lawsuit from Apple Corps (aka the Beatles' record label) regarding Apple's use of music in its products.
 
because every Silicon Valley conference room is named Endor or Alderaan / Enterprise or Holodeck, etc.
I’m not sure I’d want to name anything Alderaan considering what happens in the movie. It’s like naming a car “Vega” (or anything “Vega”, the name is cursed)
 
Threads != cores though. The best analogy I can give is that threads are like hands: you are 2 threads, 1 core in this analogy, but you still only have one mouth. You can only eat one thing at a time. You could have 16 hands (threads) but still only one mouth (core), so it's only going to get you so far (which is why we don't see 4-core, 16-thread CPUs).

There's a reason why Intel has never marketed threads as cores, because they're not. They know that, we know that.

cores shared a floating point unit, and therefore wasn’t a complete core

Cores are independent entities which are effectively mini-chips. Hardware threads reuse the logic units within the same core during their idle periods. This works for similar workloads, but when the pipeline needs to be flushed, everything stalls until the pipeline gets filled again.

Hyperthreading is just there because x86 instructions are of different lengths. By feeding 8 threads, it is much more likely that all 4 cores will be used close to their maximum. If you send 8 identical-length instructions to a hyperthreaded x86, it will take twice as long as 4 identical-length instructions.

That's the problem right there.
People are struggling with and arguing about what's essentially a marketing term.
A "core" doesn't really mean anything. There are parts in a modern CPU that are there once; there are parts that are there two, four, eight, sixteen or hundreds of times. What is usually called a core is just a larger bunch of CPU parts that repeats several times. What we call a dual core with HT could easily be called a quad core; it's just that the core pairs would share a lot more resources, and we (they) decided to call that tight bond "hyperthreading" instead.

And many more bonds are still there. It's not like you can yank a core from a CPU and expect it to work; the cores still share resources (like caches) and (just like two threads on a hyperthreaded core) they affect each other a LOT, e.g. a workload on a single core can eat lots of memory bandwidth and cause increased latency and cache misses on different cores.

I've already said it here somewhere and I'll say it again.

As a programmer, I don't give a crap about "cores".
I don't give a crap about single"core" performance or multi"core" performance.
I think in threads.


Either my code is single-threaded, and in that case I only care how fast the CPU is able to execute it.

Or it's multithreaded, and in that case I need to know the number of threads I have to throw at the CPU in order to squeeze all the available performance out of it.

For an old-school quad core and a general workload (memory access AND number crunching) that number could be 4 threads.
For a quad core with HT that number could very well be 8, because some number-crunching parts of the CPU are present more than four times and my threads can benefit from this even though they'll compete for different parts of the CPU and maybe stall each other more often.
For a big.LITTLE CPU that number can be anywhere between 4 and 8; maybe the little cores are really slow and shouldn't really be used for intensive workloads.
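As a rough illustration, here's a minimal Python sketch of that sizing approach (assuming the OS reports the number of hardware threads via `os.cpu_count()`; note that for truly CPU-bound Python you'd typically reach for processes because of the GIL, but the sizing logic is the same):

```python
import os
from concurrent.futures import ThreadPoolExecutor

# Ask the OS how many hardware threads (logical CPUs) it exposes.
# On a quad core with HT this reports 8; on big.LITTLE it counts big
# and little cores alike, so treat it as a starting point, not gospel.
n_threads = os.cpu_count() or 1

def crunch(chunk):
    # stand-in for one unit of work
    return sum(i * i for i in chunk)

# Split the work into more chunks than threads so the pool stays busy.
chunks = [range(k, k + 1000) for k in range(0, 8000, 1000)]
with ThreadPoolExecutor(max_workers=n_threads) as pool:
    total = sum(pool.map(crunch, chunks))
```

The number of chunks and the pool size are independent knobs; the point is only that the thread count comes from asking the hardware, not from counting "cores" on the box.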

Programmers talk about threads, because that's what programs are made of.
Engineers sometimes talk about cores, but in the context of a specific architecture, where the term "core" refers to a specific set of CPU parts.
Marketing departments talk about cores when they need a big number to show on the box.

Laymen have heated discussions about the number of cores and "single core performance" when they could be tending to the garden or playing with the dog or doing anything more enjoyable than falling into this trap.

Btw the same goes for RISC vs CISC.
 
Nice to see another competent post from @Toutou! Always a pleasure to read content by people who know what they are talking about.
 
I've already said it here somewhere and I'll say it again.

As a programmer, I don't give a crap about "cores".
I don't give a crap about single"core" performance or multi"core" performance.
I think in threads.
Do you actually know what a thread is? How will your application be impacted on a scaled basis? If you don’t think about these issues then I’m not going to call you a programmer. Maybe a coder.
 
Do you actually know what a thread is? How will your application be impacted on a scaled basis? If you don’t think about these issues then I’m not going to call you a programmer. Maybe a coder.

Not the person you quoted, but I am wondering what exactly are you getting at. Can you explain in more depth?
 
Do you actually know what a thread is? How will your application be impacted on a scaled basis? If you don’t think about these issues then I’m not going to call you a programmer. Maybe a coder.
Usually the OS supplies APIs to determine available resources. Or, in the case of something like Grand Central Dispatch, it abstracts away the hardware nearly completely and lets the OS handle it.
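A small sketch of what querying those OS APIs can look like, here in Python (`os.sched_getaffinity` is Linux-only, hence the fallback; both are real standard-library calls):

```python
import os

# Total logical CPUs in the machine (hardware threads, not physical cores).
total = os.cpu_count() or 1

# On Linux the scheduler may confine a process to a subset of CPUs
# (cgroups, taskset); the affinity mask is then the honest count to use.
try:
    usable = len(os.sched_getaffinity(0))
except AttributeError:  # not available on e.g. macOS or Windows
    usable = total
```

Higher-level frameworks like Grand Central Dispatch do this bookkeeping for you and schedule work items onto however many cores the OS decides to offer.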
 
That's the problem right there.
People are struggling with and arguing about what's essentially a marketing term.
A "core" doesn't really mean anything. There are parts in a modern CPU that are there once; there are parts that are there two, four, eight, sixteen or hundreds of times. What is usually called a core is just a larger bunch of CPU parts that repeats several times. What we call a dual core with HT could easily be called a quad core; it's just that the core pairs would share a lot more resources, and we (they) decided to call that tight bond "hyperthreading" instead.

And many more bonds are still there. It's not like you can yank a core from a CPU and expect it to work; the cores still share resources (like caches) and (just like two threads on a hyperthreaded core) they affect each other a LOT, e.g. a workload on a single core can eat lots of memory bandwidth and cause increased latency and cache misses on different cores.

I've already said it here somewhere and I'll say it again.

As a programmer, I don't give a crap about "cores".
I don't give a crap about single"core" performance or multi"core" performance.
I think in threads.


Either my code is single-threaded, and in that case I only care how fast the CPU is able to execute it.

Or it's multithreaded, and in that case I need to know the number of threads I have to throw at the CPU in order to squeeze all the available performance out of it.

For an old-school quad core and a general workload (memory access AND number crunching) that number could be 4 threads.
For a quad core with HT that number could very well be 8, because some number-crunching parts of the CPU are present more than four times and my threads can benefit from this even though they'll compete for different parts of the CPU and maybe stall each other more often.
For a big.LITTLE CPU that number can be anywhere between 4 and 8; maybe the little cores are really slow and shouldn't really be used for intensive workloads.

Programmers talk about threads, because that's what programs are made of.
Engineers sometimes talk about cores, but in the context of a specific architecture, where the term "core" refers to a specific set of CPU parts.
Marketing departments talk about cores when they need a big number to show on the box.

Laymen have heated discussions about the number of cores and "single core performance" when they could be tending to the garden or playing with the dog or doing anything more enjoyable than falling into this trap.

Btw the same goes for RISC vs CISC.

CPU engineers are pretty sure that they understand what a core is, irrespective of architecture, and we would never call a 2-core chip with 2-way HT a 4-core chip; the difference between those two concepts is pretty clear.

We also have a clear understanding of RISC vs CISC, and the differences there are also not at all blurry.
 
I would love to answer your question, but the second sentence doesn't make sense. I don't understand what you're asking.
As for the first question, I do know what a thread is.

Likewise. I’m puzzled as well.

CPU engineers are pretty sure that they understand what a core is, irrespective of architecture, and we would never call a 2-core chip with 2-way HT a 4-core chip; the difference between those two concepts is pretty clear.

We also have a clear understanding of RISC vs CISC, and the differences there are also not at all blurry.

Yes, but CPU engineers are not everybody and the CPU engineering context is not the only context. As a user (or a developer), I don’t care whether a CPU has 16 separate cores or a single core with 16 instruction streams if the performance, power consumption and price are the same. Literally doesn’t make a difference.
 
I would love to answer your question, but the second sentence doesn't make sense. I don't understand what you're asking.
As for the first question, I do know what a thread is.
Enlighten me as to what you think a thread is. My second question was asking how you will deal with scaling issues if all you care about is threads.
 