Usually the OS supplies APIs to determine available resources. Or, in the case of something like Grand Central Dispatch, they abstract away the hardware nearly completely and let the OS handle it.
Some do, and it does, but it all depends on what workload you're trying to perform. Ergo the question about defining a thread and how to handle scalability. Anybody who says they don't care about the hardware they're running on isn't a programmer to me. They're just coding.
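For what it's worth, here's a minimal sketch of the kind of "ask the OS" query being described, written in Python purely for brevity (GCD itself is an Apple/C/Swift API; this only shows the idea of sizing work to what the OS says is available):

    import os

    # Every hardware thread the OS exposes (logical CPUs).
    logical_cpus = os.cpu_count()

    # On Linux, the affinity mask can be narrower than cpu_count()
    # (e.g. inside a container with a CPU quota); it doesn't exist on macOS.
    try:
        usable_cpus = len(os.sched_getaffinity(0))
    except AttributeError:
        usable_cpus = logical_cpus

    print(f"logical CPUs: {logical_cpus}, usable by this process: {usable_cpus}")

Something like GCD goes a step further: you never pick a number at all, you hand work to queues and the OS decides how wide to run it.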
 
My second question is asking how you will deal with scaling issues if all you care about is threads.

What scaling issues? Your question is so vague that I really have trouble understanding what you are getting at. As to why one should care about threads, it's fairly obvious - threads are the most fundamental (and really, the only) organizational unit of execution in the mainstream CPU computation model. Once (and if) we ever move to a different model of computation, we will stop caring about threads. For example, when I program a GPU, I don't care that much about thread performance, I care about utilization.
 
What scaling issues? Your question is so vague that I really have trouble understanding what you are getting at. As to why one should care about threads, it's fairly obvious - threads are the most fundamental (and really, the only) organizational unit of execution in the mainstream CPU computation model. Once (and if) we ever move to a different model of computation, we will stop caring about threads. For example, when I program a GPU, I don't care that much about thread performance, I care about utilization.
I remember interviewing somebody who used threading in Python a lot and they were damn proud of it. They didn't get the job either because they didn't understand what threading was and couldn't even tell me the limitation threading has in Python. If somebody can't understand scalability, threads, cores, etc., and how they are all interconnected, then there isn't a programmer before us.
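For anyone following along, the limitation being alluded to is CPython's global interpreter lock (GIL). A rough, self-contained sketch of how it bites (the burn/timed helpers are invented for the example; exact timings will vary by machine):

    import time
    from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

    def burn(n):
        # Pure-Python CPU work: holds the GIL the whole time it runs.
        total = 0
        for i in range(n):
            total += i * i
        return total

    def timed(executor_cls):
        start = time.perf_counter()
        with executor_cls(max_workers=4) as pool:
            list(pool.map(burn, [5_000_000] * 4))
        return time.perf_counter() - start

    if __name__ == "__main__":  # guard needed for process pools on macOS/Windows
        print("threads:  ", timed(ThreadPoolExecutor))   # roughly serial: the GIL serializes the work
        print("processes:", timed(ProcessPoolExecutor))  # scales with real cores, no shared GIL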
 
  • Apple would have made the M1 (A14X) even if they didn't transition away from Intel
  • M1 was made to run fanless and has 4/4 cores which is basically what the A14X would have been
  • If Apple didn't ship any Macs using the A14X, they wouldn't have shipped any Apple Silicon Macs until the new MBPs later this year which would have been over a year from the ARM transition announcement to a single shipped product. Too long.
  • People are shocked that Apple put the M1 into the iPad Pro. Well... what else could they have put in there?
  • Explains the limited IO of M1
So why does it matter?
  • By now, it should be obvious that the new MBPs will be based on A15 cores. The lack of an announcement at WWDC makes this even more likely, because the new MBPs will launch close to or after the iPhone 13.
  • This means the low-end Macs will not receive the latest core designs first each year. It never made much sense for low-end Macs to destroy MBPs in single-core performance for 9+ months every year.
  • We should expect a large jump in performance from the M1 to the new MBP SoCs because those SoCs were truly designed for Macs from the ground up and have A15 cores.
  • It's possible that Apple will never make another "A#X" SoC again. Instead, they might simply bin lower-quality and defective SoCs from the J-Die Chop (sounds silly, I know). For example, instead of 8/2 CPU cores, they will disable 2 high-performance cores for the MacBook Air and iPad Pro. Doing this will save Apple design resources and make use of defective SoCs.
  • Alternatively, Apple may continue to design and produce "A#X" SoCs for iPads/low-end Macs because each J-Die Chop takes up too much space on a 300mm TSMC wafer. However, Apple would call these "M" SoCs from now on.
  • I don't believe Apple will use the "M2X" name because M is associated with low-end devices. I believe Apple will call MBP SoCs something like "P2", meaning Pro. Then they could market them as "P2 10-Core Processor. 32-Core GPU". Maybe someday we'll get something like "P4 64-Core Processor. 128-Core GPU" for the Mac Pro.
What's in a name? That which we call a rose,
By any other name would smell as sweet.
 
I remember interviewing somebody who used threading in Python a lot and they were damn proud of it. They didn't get the job either because they didn't understand what threading was and couldn't even tell me the limitation threading has in Python. If somebody can't understand scalability, threads, cores, etc., and how they are all interconnected, then there isn't a programmer before us.

I think you might be massively misunderstanding Toutou's post. It's not about spawning threads like a maniac. It's about threads being a basic container of computation.
 
Enlighten me as to what you think a thread is.
I'm not sure this is something I want to get into, especially if this is your attitude. I have a degree in software engineering, I've written C++ code with pthreads, mutexes and semaphores, I've malloc'd my own memory, I've programmed FPGAs, written some assembly, had an exam on Lisp, wrote a PID controller for a little motor on an ARM dev board.
Currently I'm a webdev, I mainly write Ruby and I enjoy not giving a damn about low-level stuff anymore (yes I've said it). But yes, I still remember what a thread is.
couldn't even tell me the limitation threading has in Python
I agree that a good Python programmer (and a Ruby programmer) should at least know what the GIL is and how concurrency != parallelism, but I wouldn't consider it that critical. We have a JavaScript prodigy in our company who has no idea what Lisp is or how mutexes work, but guess what, he is insanely knowledgeable about JavaScript and browsers. People arrive at programming from different directions, having taken wildly different paths.
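To make the concurrency != parallelism point concrete, here's a small sketch: even under the GIL, threads overlap waiting (I/O, sleeps), they just can't overlap Python CPU work, so threaded I/O is concurrent without being parallel:

    import time
    from concurrent.futures import ThreadPoolExecutor

    def fake_io(_):
        time.sleep(1)  # releases the GIL while blocked, like a socket read would

    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=8) as pool:
        list(pool.map(fake_io, range(8)))
    print(f"8 one-second waits took ~{time.perf_counter() - start:.1f}s")  # ~1s, not 8s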
isn't a programmer before us
No true Scotsman?
CPU engineers are pretty sure that they understand what a core is, irrespective of architecture, and we would never call a 2-core chip with 2-way HT a 4-core chip, and the differences between those two concepts are pretty clear.
Feel free to state your opinion on what makes a core. The HT example is a little extreme, but I'm pretty sure that there are (or were) CPUs that shared SIMD units between "cores", while other CPUs use multiple FPUs for every "core" without sharing. Cores often share at least some levels of cache, and I remember the first Intel quad-core CPUs were technically two dual-core CPUs (so there must have been some redundant circuitry) and that they were slower than expected because of this architecture.
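As a small aside on how blurry the counting already is at the OS level, a sketch using os.cpu_count() plus the third-party psutil package (assuming it is installed; the numbers in the comments are examples, not guarantees):

    import os
    import psutil  # third-party: pip install psutil

    print("logical CPUs (hardware threads):", os.cpu_count())
    print("physical cores:", psutil.cpu_count(logical=False))
    # A 2-core chip with 2-way HT reports 4 and 2 here.
    # An M1 reports 8 and 8, since Apple's cores don't use SMT at all.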
 
I'm not sure this is something I want to get into, especially if this is your attitude. I have a degree in software engineering, I've written C++ code with pthreads, mutexes and semaphores, I've malloc'd my own memory, I've programmed FPGAs, written some assembly, had an exam on Lisp, wrote a PID controller for a little motor on an ARM dev board.
Currently I'm a webdev, I mainly write Ruby and I enjoy not giving a damn about low-level stuff anymore (yes I've said it). But yes, I still remember what a thread is.
Then you'll know the difference between a kernel thread being prioritized on a core and what people confuse with user-library threading. I do care about this "low-level stuff" because I find web development to be a garbage dump full of clueless coders whose only interest is to indulge in the latest doo dah in their favorite language. I can't, and no longer do, live in that environment, as actual programmers rarely exist there.
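To put the kernel-vs-user distinction in code, a minimal sketch (Python only for brevity; the worker names are made up): threading.Thread creates a real kernel thread that the OS scheduler can place on a core, whereas coroutines/green threads are user-level and multiplex on a kernel thread the OS never sees as separate work:

    import threading

    def report():
        # get_native_id() (Python 3.8+) returns the OS-assigned kernel thread ID.
        print(f"{threading.current_thread().name}: kernel thread id {threading.get_native_id()}")

    workers = [threading.Thread(target=report, name=f"worker-{i}") for i in range(3)]
    for t in workers:
        t.start()
    for t in workers:
        t.join()
    # Three distinct native IDs: three schedulable kernel threads. An asyncio
    # event loop running three tasks would report the same ID three times.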
I agree that a good Python programmer (and a Ruby programmer) should at least know what the GIL is and how concurrency != parallelism, but I wouldn't consider it that critical. We have a JavaScript prodigy in our company who has no idea what Lisp is or how mutexes work, but guess what, he is insanely knowledgeable about JavaScript and browsers. People arrive at programming from different directions, having taken wildly different paths.
A competent Python programmer should know what the GIL is, but many don't, just as they don't know the difference between concurrency and parallelism, and both are critical. That's why the web is full of all this trash: it's written by clueless programmers.

Faster hardware is no excuse for garbage coding. It's how we end up with the dumpster that exists today.
 
Then you'll know the difference between a kernel thread being prioritized on a core and what people confuse with user-library threading. I do care about this "low-level stuff" because I find web development to be a garbage dump full of clueless coders whose only interest is to indulge in the latest doo dah in their favorite language. I can't, and no longer do, live in that environment, as actual programmers rarely exist there.

A competent Python programmer should know what the GIL is, but many don't, just as they don't know the difference between concurrency and parallelism, and both are critical. That's why the web is full of all this trash: it's written by clueless programmers.

Faster hardware is no excuse for garbage coding. It's how we end up with the dumpster that exists today.
It is true that web dev has a high proportion of bad programmers, since it's how a lot of people start learning programming.

And yes, there's a lot of trash on the web, but there are also a lot of gems. That's usually what happens when you have an insanely popular platform like the browser: you get a lot of trash and a lot of gems.

As someone who has programmed both the front end and the backend, the backend is significantly easier to program, in my humble opinion. The browser is hugely unpredictable, with so many different versions. And it's closest to the users, so there's even more unpredictability.

Front end programming is hard. But regardless, most people don’t just do front end anymore. They do a combination of front end, backend, and sometimes even apps.
 
Front end programming is hard. But regardless, most people don’t just do front end anymore. They do a combination of front end, backend, and sometimes even apps.
This is the scary part. Front end is full of people I wouldn't give jobs to and they go down the stack to where it really can be critical to the business. I couldn't deal with these people so had to exit for my sanity. A lot of them are clueless on basic computing paradigms. I fear for the future.
 
This is the scary part. Front end is full of people I wouldn't give jobs to and they go down the stack to where it really can be critical to the business. I couldn't deal with these people so had to exit for my sanity. A lot of them are clueless on basic computing paradigms. I fear for the future.
In terms of applications, much of the backend complexity has been moved to the front end. Microsoft Office runs on a browser. The most popular UI/UX design tool is browser-based: Figma. You can then wrap these web applications with Electron and ship a desktop version. It's how many modern user-facing apps are built.

In addition, a lot of backend services have been abstracted by SaaS/PaaS. Need an authentication server? Use Auth0 or Firebase. Need a caching server? Use ElastiCache. Need text search? Elasticsearch. Logging? Papertrail.

My San Francisco/Silicon Valley-based tech company uses mostly full-stack developers - that is, they can work on web, app, and backend. We have a small central team that builds services for full-stack developers to consume. This allows us to move extremely fast and have feature parity between iOS/Android/Web/Desktop. I'm guessing that this is how a vast majority of your favorite internet-based apps are developed as well.

We're not in 2003 anymore, where front-end devs just know HTML/CSS and a bit of JavaScript and the "real devs" work on the backend.

Perhaps you feel a bit threatened by this paradigm switch?
 
It can run 8 hardware threads at once, so yes, it can definitely "truly process" 8 things simultaneously. M1 also supports 8 hardware threads at once.

Anyway, this discussion illustrates very well what I mean. What exactly are you counting as cores? The ability to run simultaneous execution contexts (threads)? Independent hardware backends? Independent hardware frontends? Hardware register sets? Execution schedulers? Modular hardware building blocks? Any of these things make sense, one way or another, as cores.

For a consumer, a core is, well, a certain promise of performance. A consumer expects an 8-core CPU to be roughly twice as fast as a 4-core CPU. This is generally true with a symmetric design, not so much with an asymmetric design.

Ah, well, it's all moot in the end. One can make these things arbitrarily complex as well as arbitrarily simple. The only sure thing is that the convenient times when we had a single CPU core and reasoning about these things was easy are long gone :)
Definitely, HT is not the same as a "core"; there are a few differences and limitations. Even the new SMT in Ryzen is far better than Intel's HT.
 
I find it tedious, in truth. I prefer to be closer to the hardware, as that's what motivates me. Running after the latest fad in the stack is just boring to me, which is why I exited that world to return to my roots.

I think one can often benefit from learning new technologies, assuming of course those technologies are based on something solid. We have learned a lot of things in the last two decades and we have learned that not all choices made in the past were optimal (cough, C++, cough).

The nice thing about web toolkits is that they drive research forward. Reactive programming, async/await, rekindled interest in functional paradigms — all these things, which are cornerstones of modern high-performance programming, were ultimately driven by forward-thinking web frameworks. Now, I imagine that working with those frameworks is a stressful roller coaster (which is also why I never want to do web apps), but that's a different question.
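A tiny async/await sketch of the style those frameworks popularized: await many in-flight operations on a single thread instead of blocking one thread per request (fetch here is a stand-in for a network call, not a real API):

    import asyncio

    async def fetch(i):
        await asyncio.sleep(1)  # stand-in for a network round trip
        return f"response {i}"

    async def main():
        # All ten "requests" are in flight at once, on one thread.
        results = await asyncio.gather(*(fetch(i) for i in range(10)))
        print(results)

    asyncio.run(main())  # finishes in about 1 second, not 10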
 
I think one can often benefit from learning new technologies, assuming of course those technologies are based on something solid. We have learned a lot of things in the last two decades and we have learned that not all choices made in the past were optimal (cough, C++, cough).

The nice thing about web toolkits is that they drive research forward. Reactive programming, async/await, rekindled interest in functional paradigms — all these things, which are cornerstones of modern high-performance programming, were ultimately driven by forward-thinking web frameworks. Now, I imagine that working with those frameworks is a stressful roller coaster (which is also why I never want to do web apps), but that's a different question.
You are ultimately right that they do drive things forward. There is value added by things like Marshmallow, which allows for rich specification and parsing of inputs to a REST API, but, well, meh. I'd much rather do low-level programming of an MCU in C to drive fan speeds.
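For readers who haven't used it, a hedged sketch of the kind of input specification meant here, using marshmallow's documented Schema/fields API (the FanSpeedRequest schema and its fields are invented for the example):

    from marshmallow import Schema, ValidationError, fields, validate

    class FanSpeedRequest(Schema):
        fan_id = fields.Int(required=True)
        rpm = fields.Int(required=True, validate=validate.Range(min=0, max=5000))

    try:
        payload = FanSpeedRequest().load({"fan_id": 2, "rpm": 1800})
        print(payload)        # {'fan_id': 2, 'rpm': 1800}
    except ValidationError as err:
        print(err.messages)   # field-by-field errors, e.g. for an out-of-range rpm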
 
Everybody is assuming that the higher-end machines will have a single, beefier ARM-based chip in there, whereas it's cheaper to shovel in a couple of ARM processors, as that solves your memory bandwidth issue per chip and reduces development costs. All you need is a high-speed interconnect, which PCIe gives you for bandwidth between CPU chips. It's been done many times before.
I doubt this is the way Apple will go, it's inelegant and they have so much headroom (physical space and thermals) for increased core counts that I doubt it will be necessary. In the Mac Pro we might see upgradeable RAM and they'll use tiering in their OS memory management to prioritise on-package RAM over external DIMM (or whatever form factor they choose) and dedicated upgradeable GPUs (probably proprietary). I guess we will see.

Yes, what you're suggesting has been done before, but usually because of physical or technical limitations with the individual CPUs. Besides, this approach has rapidly fallen out of favour in the last 5-10 years (even in the datacenter) given the core counts that Intel and AMD have been able to achieve per CPU.
 