Something to remember: Apple has been pushing 4K for a while now with their silicon. The Apple TV 4K (don't know/remember all the specs) uses an A10X. The iPad Pro with the A12Z pushes 4K over USB-C, I think only at 30 FPS, but we have to remember it is also powering its own Retina display.
Pushing 4K video and pictures (2-dimensional data) to the screen isn't a huge "ask". The Amazon Fire TV 4K and the upcoming "Chromecast", $40-80 dongles, do it. Sub-$300 TVs' processors do it.
4K compressed video decode is a fixed-function block sitting alongside the GPU (usually). It doesn't necessarily denote much in terms of general-purpose "horsepower".
4K 3D rendering is a different "ask" of the system, even though the "4K" prefix is present on both.
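To put rough numbers on that distinction, here's a back-of-envelope sketch. All the figures are my own illustrative assumptions (especially the "touches per pixel" multiplier), not anything from a spec sheet:

```python
# Back-of-envelope: why 4K video scan-out is a smaller "ask" than
# 4K 3D rendering. All figures are rough illustrative assumptions.

width, height, bytes_per_pixel = 3840, 2160, 4  # 4K frame, 8-bit RGBA

# Scan-out: the display engine reads one finished frame per refresh.
scanout_60hz_gbs = width * height * bytes_per_pixel * 60 / 1e9
print(f"4K scan-out @ 60 Hz: ~{scanout_60hz_gbs:.1f} GB/s of memory reads")

# 3D rendering at 4K touches each pixel many times per frame:
# depth tests, overdraw, texture fetches, post-processing, etc.
touches_per_pixel = 20  # assumed average; varies wildly by workload
render_60fps_gbs = scanout_60hz_gbs * touches_per_pixel
print(f"4K rendering @ 60 fps: ~{render_60fps_gbs:.0f} GB/s "
      f"(assuming {touches_per_pixel}x pixel traffic)")
```

So scan-out of a finished frame is on the order of 2 GB/s, which any cheap dongle's memory system can handle, while rendering that same frame can demand an order of magnitude or two more bandwidth and compute.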
So I believe that for most day-to-day stuff, Apple silicon can already do what needs to be done for single-screen setups that aren't majorly intensive.
But so can all the current iMacs (Intel-based): run a web browser, handle Zoom calls, watch YouTube/Netflix, edit Word documents and some basic spreadsheets, muck around with a photo collection, and tweak some 2-3 minute video clips from the phone.
So my feeling is that any rumor coming out of the mill would be about a much bigger and grander GPU than what's tied to their current chips (even the A13, which is only in the phones).
Personally I can’t wait to see what they bring out. 🤤
I would expect the Apple SoC to still have a basic GPU built into it, even on the outside chance they are doing a discrete (and embedded) GPU coupled to another place on the logic board.
Or that this "discrete" GPU was part of the same package [CPU-and-other-stuff die + GPU die + HBM, all on a largish interposer package. More a "scale-up" of the die-integrated GPU, but capping the die sizes at a threshold Apple didn't want to cross. Intel has a similar DG1 that is a "super-sized" version of the integrated on-die Xe-LP, but still not a mid-to-high-end GPU.]
If all Apple is trying to do is peel the Radeon Pro 555X - 560X (maybe Pro Vega 20) class dGPUs off the iMac 21.5", then that wouldn't be too surprising for the 2nd half of 2021. The GPU would still be relatively small compared to mainstream desktop GPUs on add-in cards. There is going to be a point where their "unified memory", integrated on-die GPU runs out of steam when connected to the same main memory the ARM cores are using. At some point they won't be able to finesse that with a bigger system cache. (On laptops they can cover most of the models with unified memory and a bigger cache.)
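A rough peak-bandwidth comparison shows why the shared-memory approach eventually hits a wall. These are standard per-technology peak numbers for the memory types in question (not Apple specs), and the bus widths are typical configurations I'm assuming for illustration:

```python
# Rough peak-bandwidth comparison: why a die-integrated GPU sharing the
# CPU's memory eventually "runs out of steam" against even small dGPUs.
# Per-technology figures below are standard peaks; configs are assumed.

def peak_gbs(transfer_rate_mtps, bus_width_bits):
    """Peak bandwidth in GB/s: transfers/sec * bytes per transfer."""
    return transfer_rate_mtps * 1e6 * (bus_width_bits / 8) / 1e9

configs = {
    "LPDDR4X-4266, 128-bit (typical shared SoC memory)": (4266, 128),
    "GDDR6 @ 14 Gb/s/pin, 128-bit (small dGPU)": (14000, 128),
    "HBM2 @ 2.0 Gb/s/pin, 1024-bit (one stack)": (2000, 1024),
}

for name, (rate, width) in configs.items():
    print(f"{name}: ~{peak_gbs(rate, width):.0f} GB/s")
```

The gap (roughly 68 GB/s shared vs 224+ GB/s dedicated) is what a bigger system cache papers over for a while, and what HBM on the same package would attack directly.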