Even if Apple Intelligence is quite underwhelming right now, I assume they're in it for the long run. The M2 Ultra is obviously not the best fit, even if it seems quite competitive for inference on a perf/watt basis (just speculation on my part).
So, let's assume Apple is building their rumored new server/workstation chip. What would be the most sensible approach from Apple's perspective? Something that also allows actual training?
With the known building blocks at their disposal, what could a good approach look like? Doing what AMD and Nvidia do, with HBM and lots of tensor cores targeting 4- and 8-bit precision? Or some more general approach?
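For anyone wondering what "targeting 4- and 8-bit" actually means for the silicon, here's a toy int8 weight quantization in plain NumPy. Everything here (shapes, per-row scaling) is hypothetical and just illustrates the kind of integer math such tensor cores would accelerate:

```python
import numpy as np

# Toy illustration: store weights as int8 plus a per-row float scale,
# so the bulk of the matmul can run on cheap integer units.
rng = np.random.default_rng(0)
w = rng.standard_normal((128, 256)).astype(np.float32)

scale = np.abs(w).max(axis=1, keepdims=True) / 127.0   # per-row scale
w_q = np.round(w / scale).astype(np.int8)              # 8-bit weights

x = rng.standard_normal((1, 128)).astype(np.float32)
y = x @ (w_q.astype(np.float32) * scale)               # dequantize-and-matmul
y_ref = x @ w
print(np.abs(y - y_ref).max())  # quantization error stays small
```

4-bit is the same idea with 16 levels instead of 256, which is why it usually needs finer-grained (grouped) scales.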
Do “we” have any insights or ideas at all at this time?
What memory types? What interconnects? What balance between CPU, GPU, and tensor cores? RT cores would clearly just occupy die area for these purposes, but they might still be there, unused, to get the benefits of die reuse and to also deliver a high-end workstation chip from the same design.
Also, what frameworks would they be running? PyTorch is still half-broken on macOS (the MPS backend has real gaps in op coverage), and MLX is more of an experimental framework as it stands. Core ML for inference, perhaps, but clearly not for training.
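For what it's worth, both stacks are at least usable for small experiments today. A minimal smoke test of each, assuming recent pip-installed `torch` and `mlx` builds (sizes arbitrary, nothing Apple-confirmed here):

```python
import torch

# PyTorch's Metal (MPS) backend exists but has op-coverage gaps;
# setting PYTORCH_ENABLE_MPS_FALLBACK=1 routes unsupported ops to the CPU.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
x = torch.randn(1024, 1024, device=device)
print((x @ x.T).mean().item(), "on", device)
```

```python
import mlx.core as mx

# MLX is lazy: the matmul only runs when mx.eval() forces it,
# and arrays live in unified memory shared between CPU and GPU.
a = mx.random.normal((1024, 1024))
b = a @ a.T
mx.eval(b)
print(b.shape)
```

That unified-memory model in MLX is arguably the one place Apple's software already matches its hardware story, even if the framework itself is young.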
Let the speculation run wild 😂