I have a serious question that I'd very much appreciate a full range of answers on.
Firstly, I LOVE computer equipment getting faster.
It's a personal joy to see things getting quicker and quicker in a noticeable way each year.
I'm sick of Intel offering such small speed bumps on mainstream CPUs over the past few years, ever since Sandy Bridge.
So I LOVE the way Apple keep pushing and pushing.
My BIG question is this.
How much software, and what software, is out there right now that actually needs and/or takes advantage of this power?
With a desktop, you see smoother games: what was a jerky framerate becomes a smooth one.
Or you get smoke, shadows, and lighting effects you did not have in the old games.
Are programmers REALLY taking full advantage of the very latest hardware that Apple are making for us all to enjoy?
Or are devs just stuck back in time, scared to make software (let's be honest, GAMES!) that NEEDS the latest and best from Apple to run well, because they don't wish to lose sales to owners of older phones?
I want the latest and best in hardware, but I hate it when software writers don't take FULL advantage of it.
So please, I ask you. Can you give me an idea of what the situation is here with the latest and greatest that Apple offers us all?
Hi piggie, just want to clarify some things. Long-term geek here who follows CPUs and PC/computer development. This is just my caveat that I am not a CPU engineer, so if someone has better information, I'm happy to hear it.
First:
In regard to Intel's diminishing returns on their higher-end architectures: this is an unfortunate limitation that seems to be cropping up due to the physics of how transistors in CPUs work.
For CPUs to get noticeably faster in the x86 realm, there are generally two methods. The first is an architecture redesign: putting more transistors in place to do more work faster, and more kinds of work. However, a limitation to this is energy loss through heat. When a transistor in a CPU switches, a tiny bit of that electrical energy is always lost as heat. As more transistors are put into place to execute more instructions, they in turn generate even more heat. Intel learned the hard way during the Pentium 4 era that there is only so much heat a chip package can safely dissipate before it becomes uncomfortable for users (they became mini heaters!) and you risk damage to components.
To counter the heat issue, waste less energy as heat, and fit more transistors onto an existing package, we undergo what is commonly known as a process shrink. You have probably seen terms like "14nm" and "10nm" thrown around. What this loosely refers to is the size of a transistor's smallest features, roughly the gap its current has to cross between terminals. That gap is bridged by a semiconductor, in a CPU's case silicon, which resists the flow of current when the transistor is off. As we get smaller and that gap shrinks, each transistor needs less energy to switch, so less is wasted as heat (cooler transistors).
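To put a rough number on why a shrink helps: dynamic power in a chip is commonly approximated as activity × capacitance × voltage² × frequency, so even a modest drop in voltage and capacitance pays off quickly. Here's a minimal Swift sketch of that scaling; the figures are made up purely for illustration, not real chip data:

```swift
import Foundation

// Classic dynamic-power approximation: P ≈ activity * capacitance * voltage^2 * frequency.
// All numbers below are invented just to show the scaling, not real chip measurements.
func dynamicPower(activity: Double, capacitance: Double, voltage: Double, frequencyHz: Double) -> Double {
    return activity * capacitance * voltage * voltage * frequencyHz
}

// Hypothetical "old node": higher voltage and more switched capacitance.
let oldNode = dynamicPower(activity: 0.2, capacitance: 1.0e-9, voltage: 1.2, frequencyHz: 3.0e9)

// Hypothetical "shrunk node": same clock, but lower voltage and capacitance.
let newNode = dynamicPower(activity: 0.2, capacitance: 0.7e-9, voltage: 0.9, frequencyHz: 3.0e9)

print("Relative power after the shrink: \(newNode / oldNode)") // roughly 0.39x: headroom for more transistors or higher clocks
```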
But we have started running into a physics problem. As that gap gets smaller and smaller, we start overcoming the resistive properties of the silicon: the gap becomes so small that current can leak across it even when the transistor is supposed to be off, effectively 'shorting it out' and rendering it useless. This has caused a dramatic slowdown in our ability to make much smaller transistors. Basically, until we find a replacement for silicon in our CPUs, there is a limit to how small they can go, which in turn puts a limit on how many more transistors we can add without pushing the heat boundaries again.
Because of that, the high-end chip makers (especially Intel) have hit a point of diminishing returns. We're just not seeing these tremendous leaps in performance any more, and likely won't for a while, until we change our very theories of how a computing switch works (quantum computing, for example, moves beyond the simple on/off transistor altogether).
Second:
I love what Apple has managed in the CPU space, but it too is highly dependent on the above. The difference, and the reason Apple and the other ARM CPU makers are gaining so much performance so fast, is that they come at CPUs from a different angle. Where Intel started from the high end and has had to work its way down to cooler, more efficient designs, ARM CPUs started as insanely low-powered, low-energy chips and have worked their way UP. So we are seeing absolutely tremendous leaps all at once as they start venturing into the higher-heat, higher-power world that Intel used to be king of.
However, they are still significantly behind in providing extremely powerful CPUs. But then again, that's not their goal: they are meant to be far less complex, far more single-purpose, and far more power efficient.
If you were to run a straight benchmark between the fastest ARM CPU and the fastest x86 CPU, there would be no competition; the x86 CPU at this point would be leaps and bounds more powerful.
Intel, on the other hand, has had a dramatic focus on scaling down to compete with ARM; that's where their biggest R&D effort seems to be these days.
Third:
We have absolutely hit a point in tech where CPU architectures are no longer the bottleneck on their platforms; software seems to be the primary limiter now. Part of this, especially in gaming IMHO, was having the last generation of consoles around for far, far too long. Game devs were mostly programming to the lowest common denominator, namely the consoles: ten-year-old hardware by the end of it. This was evident when we saw games released even in the last couple of years that barely scratched the performance of modern PCs or even mobile devices! Devs weren't all that incentivised to do better, because these really old consoles weren't being replaced and couldn't power the newer, higher-end development we wanted. Thankfully, that has started to change in the last year, and games like GTA V and Dragon Age: Inquisition have really started pushing the boundaries of modern hardware. This should eventually trickle its way down to everything from console to mobile.
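And on the original question of whether devs can push the newest chips without abandoning older phones: the usual answer is to detect what the hardware can do at launch and scale the effects up or down, rather than hard-coding for the oldest device. A minimal Swift sketch of that idea for an iOS game; the tiers and thresholds here are hypothetical, just to show the pattern:

```swift
import Foundation
import Metal

// Illustrative quality tiers a game might switch between (names are made up).
enum GraphicsTier {
    case low      // older phones: fewer particles, no fancy lighting
    case medium   // mid-range: shadows on, moderate effects
    case high     // latest chips: full smoke, shadows, and lighting effects
}

// Pick a tier from what the hardware reports instead of targeting the lowest common denominator.
func chooseGraphicsTier() -> GraphicsTier {
    // Metal support is a rough proxy for a reasonably modern Apple GPU.
    guard MTLCreateSystemDefaultDevice() != nil else {
        return .low
    }
    // Core count and RAM are crude but real signals of how much headroom exists.
    let cores = ProcessInfo.processInfo.activeProcessorCount
    let ramGB = Double(ProcessInfo.processInfo.physicalMemory) / 1_073_741_824.0

    if cores >= 4 && ramGB >= 2.0 {
        return .high
    } else if cores >= 2 && ramGB >= 1.0 {
        return .medium
    } else {
        return .low
    }
}

let tier = chooseGraphicsTier()
print("Running with graphics tier: \(tier)")
```

That way a game can turn on the extra smoke, shadows, and lighting on the newest hardware without giving up sales to owners of older phones.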
I know, long post, but I really hope that helps answer your questions.
