If anything, Apple's GPU project is just getting started. It wasn't so long ago that they licensed from Imagination.
I'm not worried about that.
There's a limit however to how far they can go without shrinking their dies or going bigger.
Nah, it is bulky cause folks want "cool" and "quiet" GPUs, thus the coolers are monstrous. The workstation equivalent cards are smaller (and use less power).
No, it's bulky because NVIDIA had a hard time optimizing their design. We don't have to make excuses for it.
This. Also, the data centre/scientific computing L40 cards are fully passive and not any bigger than the A40 they replace. I suspect that nVidia also assumes that 'bigger seems newer/more powerful' and, hence, can better justify their atrocious prices.
What's the point? If you are committed to the old bottleneck model, just get a PC workstation.
Well that’s one way to look at it. But graphics cards exist for a reason. That’s a lot of hardware to integrate. If anyone could find a way, it’s Apple, but their integrated GPUs are still not as good as discrete cards, and the dedicated graphics memory bus has always been the fastest. I think it’s worth doing the processing on dedicated hardware and having a dedicated high-bandwidth channel.
It's about long-term vision, really. The paradigm is shifting to heterogeneous workflows, raytracing, procedural graphics, machine learning etc. The memory bus is absolutely going to become a problem before long. Whoever didn't take precautions will be left by the roadside very quickly. The big semiconductor companies are painfully aware of this. As I mentioned earlier, Nvidia was publicly talking about these issues years ago, which is why they are researching processing-in-memory and similar solutions. Intel and AMD are working on tight integration of CPU and GPU into one heterogeneous machine, and Nvidia was interested in acquiring ARM for a similar reason. The paradigm is shifting. At least that's what I believe.
We can talk again in a couple of years!
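To put a rough number on the memory-bus worry above, here is a minimal back-of-envelope Swift sketch. The figures are my own illustrative assumptions, not anything measured in this thread: a 4K float32 RGBA buffer and a PCIe 4.0 x16 link with roughly 31.5 GB/s of usable one-way bandwidth.

```swift
import Foundation

// Back-of-envelope: the cost of shuttling one frame's working data over a
// discrete-GPU bus. Assumed figures (illustrative only): 4K float32 RGBA,
// PCIe 4.0 x16 at ~31.5 GB/s usable in one direction.
let width = 3840, height = 2160
let bytesPerPixel = 16                                      // RGBA, 4 x Float32
let frameBytes = Double(width * height * bytesPerPixel)     // ~132.7 MB

let busBytesPerSecond = 31.5e9                              // approx. PCIe 4.0 x16, one way
let transferSeconds = frameBytes / busBytesPerSecond

print(String(format: "One-way transfer: %.2f ms", transferSeconds * 1000))
// ~4.2 ms each way, a large slice of a 16.7 ms (60 fps) frame budget if the CPU
// and GPU have to round-trip the data every frame; unified memory skips that copy.
```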
Well that’s one way to look at it. But graphics cards exist for a reason.
Or instead of integrating, they could just throw everything into the GPU, as computers are ever-more reliant on them anyway. Some programs will even use GPU cores for regular calculations to speed things up.
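As a concrete (and heavily simplified) illustration of using GPU cores for regular calculations, here is a minimal Metal compute sketch in Swift. The kernel name, the trivial "double every value" workload, and the buffer sizes are my own assumptions for the example, not something from this thread.

```swift
import Metal

// Minimal GPU compute sketch: double an array of floats on the GPU.
// Kernel and sizes are illustrative assumptions only.
let kernelSource = """
#include <metal_stdlib>
using namespace metal;
kernel void double_values(device float *data [[buffer(0)]],
                          uint id [[thread_position_in_grid]]) {
    data[id] = data[id] * 2.0;
}
"""

let device = MTLCreateSystemDefaultDevice()!                 // the system GPU
let library = try! device.makeLibrary(source: kernelSource, options: nil)
let pipeline = try! device.makeComputePipelineState(function: library.makeFunction(name: "double_values")!)
let queue = device.makeCommandQueue()!

var input: [Float] = (0..<1024).map(Float.init)
// .storageModeShared: on Apple Silicon the CPU and GPU see the same memory, so no copy step.
let buffer = device.makeBuffer(bytes: &input,
                               length: input.count * MemoryLayout<Float>.stride,
                               options: .storageModeShared)!

let commands = queue.makeCommandBuffer()!
let encoder = commands.makeComputeCommandEncoder()!
encoder.setComputePipelineState(pipeline)
encoder.setBuffer(buffer, offset: 0, index: 0)
encoder.dispatchThreads(MTLSize(width: input.count, height: 1, depth: 1),
                        threadsPerThreadgroup: MTLSize(width: 64, height: 1, depth: 1))
encoder.endEncoding()
commands.commit()
commands.waitUntilCompleted()

let results = buffer.contents().bindMemory(to: Float.self, capacity: input.count)
print(results[1], results[2])   // 2.0 4.0
```

In a real app you would reuse the pipeline and queue rather than rebuilding them per call; the one-off setup here is just to keep the sketch self-contained.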
People that are expecting the kind of jump we saw from Intel to the M1 every year frankly have insanely unrealistic expectations. It was never going to happen.
When Apple isolated themselves by deprecating OpenGL and discontinuing 32-bit and eGPU support, they started playing a dangerous game: their bet was that their new architecture would be so much better that Apple Silicon would be adopted en masse, which would make everyone play into their hands.
That thought is misguided even if the PC alternative is indeed much worse. But if Apple's alternative starts lagging behind and performance stops being so impressive over time, the argument for an Apple Silicon machine gets weaker. Why bother spending extra on a machine that is only slightly better, or not better at all, than an AMD or Intel chipset?
Sure, right now they have the edge. But if they don't keep up, that edge will eventually disappear.
You're kinda burying the lede about undervolting the CPU there champ. This is how the i7 and i9 run stock with TVB; that is how it's designed to run OOB.
You need to reseat that heatsink. If done properly you won't get anywhere close to 100°C and throttling.
With BIOS settings of a 288W power limit and 95°C, plus a -0.4 core voltage offset, mine doesn't go over 90°C and still scores above 30k in Cinebench.
Don't forget many people buy Macs due to preference over Windows. Who cares if the chip is only comparable to an Intel or AMD chip when the experience is still solid? The "extra" you pay for a Mac can be for many things other than chip performance: for example, the longer life expectancy of the hardware, less malware/viruses, better optimised software, etc.
There likely isn't enough demand for desktop computers to justify Apple creating a custom chip that can rival the top Intel / AMD / Nvidia cards in terms of raw performance, and maybe the point is that Apple doesn't need to.
I think that's what I like about Apple these days - they are being extremely disciplined.
Or maybe Tim Cook is resting on his laurels and not capitalizing on the buzz and success that the Apple Silicon line brought. The last Apple innovation was the Apple Pencil, around four years ago. If it hadn't been invented, the iPad would be a flop (because it's the one thing that REALLY sets it apart from other tablets).
I think that's what I like about Apple these days - they are being extremely disciplined.
I believe that Apple has a very clear product roadmap and they know well enough to stick to it, rather than try to capitalise on all this buzz and get suckered into a meaningless arms race with other chip manufacturers just for the sake of grabbing a headline on The Verge. Apple doesn't need a chip that beats the competition in raw benchmarks. They need a chip that enables the best user experience in their products, given the constraints of said products. And I feel that's something Apple has accomplished admirably.
What will gaming and third party graphics card use be like with Apple silicon in the future?
There won’t be third party graphics cards.
Maybe Apple decided to cripple its lineup in 2022/2023 with a 5nm M2, until 2024's 4nm M3 (built in Arizona)?
They just need to give users more versatile products. For example, allowing for eGPUs in their MacBooks or M1 Mac virtualization would require zero changes to the form factor while giving users many more use-case options.
We are just expecting a chip update, not a new design, so the Mac Studio could be included.