> You mean Firmament, I think.

They showed both. AFAIK Firmament doesn't have any RT, while Myst does.
> There was supposed to be something else, which was apparently filmed, but probably too glitchy to show. I wonder if the software is ready for MS and RT or if they're still working on it. Anyway, something for next year then.

Was it related to Capcom announcing improved RTGI and Mesh Shader support in the next iteration of RE Engine?
It was never supposed to be an exclusive gaming event, but it felt a little thin, as they probably could have shown more (even if it were just a released-game benchmark: "this is x times faster than M2").
I'm a little confused by the configuration options. I can get 96GB on some of the smaller SoCs, but not 64GB, so I either have to swallow the massive price for the memory or upgrade the SoC and then configure 64GB. And then there's the largest SoC, which offers 48GB, 64GB or 128GB, but not 96GB. Odd.
Right now I'm more interested in the scientific computing aspect they mentioned multiple times. How much does the M3 add to the ML game? Say, compare the maxed-out version with 128GB against which Nvidia card for TF, PyTorch and pure compute in CUDA/Metal?
> Right now I'm more interested in the scientific computing aspect they mentioned multiple times.

They do mention ML, though, but IIRC scientific computing is different, right? I don't expect to use TF or PyTorch when I'm doing astronomical simulations, since it will be heavily C, Fortran, and Python (data analysis, not ML) based, according to my professors in the Astro dept at my uni.
Anandtech:

> "In Windows parlance, the M3 GPU architecture would be a DirectX 12 Ultimate-class (feature level 12_2) design, making Apple the second vendor to ship such a high-feature integrated GPU within a laptop SoC."
> Mesh shading upends the entire geometry rendering pipeline to allow for far more geometric detail at usable frame rates. It's very much a "baseline" feature – developers need to design the core of their engines around it – so it won't see much in the way of initial adoption, but it will eventually be a make-or-break feature that serves as the demarcation point for compatibility with pre-M3 GPUs.

Metal 3 does mesh shaders, and is fully supported by M2 GPUs. What am I missing?
> Metal 3 does mesh shaders, and is fully supported by M2 GPUs. What am I missing?

M2 doesn't have hardware-accelerated mesh shaders; that's an M3 feature. For why it's important, see Alan Wake 2's performance on GPUs that do and don't support mesh shaders: a GeForce GTX 1660 can outperform a GTX 1080 because of it.
> M2 doesn't have hardware-accelerated mesh shaders.

They don't execute on the CPU, do they?
> They don't execute on the CPU, do they?

I don't think so, but the lack of specific bits in the GPU to handle them does hinder performance. We could see this with RT too: Nvidia allowed using RT on the 1080 Ti after the 20-series launch, but it still ran terribly due to the lack of RT cores.
> Interesting report from Anandtech. Seems like the M3 Pro is less "Pro" than the M2 Pro was (slower memory bus, fewer performance and GPU cores). Performance should still be better, though.

Having the feature, plus good ray reconstruction along with upscaling, means RT is within the grasp of lower-end hardware, and developers can stop using faked GI (and shadows and reflections).
I wonder what the use for ray tracing on the standard M3 is. That chip will be too slow to run the games that support RT (except for "slow-paced" games such as Myst, where 60 fps might not be necessary).
M3 Max seems like a beast – but it's veeeery expensive.
> They don't execute on the CPU, do they?

Probably get converted to whatever fallback Apple would use (vertex shaders?).
> Was it related to Capcom announcing improved RTGI and Mesh Shader support in the next iteration of RE Engine?

No, not related to Capcom.
> They do mention ML tho but iirc scientific computing is different right?

I'd certainly count ML as scientific computing if it's used for scientific applications; the math is often very similar or the same. With pure C and Fortran you're usually out of luck, as these codes are mostly not very well parallelized for either CPU or GPU. With Python you likely have more luck if you're using a modern framework. I've done a bit of radio astronomy in the past with my students, but they're usually CS, math or EE students, not physics. Most software in that field is rather old and somewhat outdated. But that doesn't mean you couldn't take advantage of GPUs in astronomy. For example, you can easily use a neural network (on a GPU) to detect Einstein rings or gravitational lensing in general.
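To make the lensing-detection idea a bit more concrete: a real detector would be a trained neural network, but the kind of GPU-friendly array math it boils down to can be sketched with a plain matched filter in NumPy (everything here is invented for illustration: the ring template, the synthetic image, and the noise level; this is a stand-in for an actual network, not real astronomy code):

```python
import numpy as np

def ring_template(size=15, radius=5, width=1.5):
    """A binary ring: 1 near the given radius, 0 elsewhere."""
    y, x = np.mgrid[:size, :size] - size // 2
    r = np.hypot(x, y)
    return (np.abs(r - radius) < width).astype(float)

def matched_filter(image, template):
    """Score every placement of the template over the image."""
    th, tw = template.shape
    H, W = image.shape
    t = template - template.mean()  # zero-mean, so flat background scores ~0
    scores = np.zeros((H - th + 1, W - tw + 1))
    for i in range(scores.shape[0]):
        for j in range(scores.shape[1]):
            scores[i, j] = np.sum(image[i:i + th, j:j + tw] * t)
    return scores

rng = np.random.default_rng(0)
image = rng.normal(0, 0.1, (40, 40))   # noisy background
tpl = ring_template()
image[12:27, 18:33] += tpl             # plant a ring, top-left corner at (12, 18)

scores = matched_filter(image, tpl)
i, j = np.unravel_index(np.argmax(scores), scores.shape)
print(i, j)                            # recovers the planted corner
```

The nested loop here is exactly the kind of independent, data-parallel work that maps well onto a GPU, which is why frameworks like PyTorch shine for it.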
Sonic Dream Team is coming to Mac on Dec 5th.
> They don't execute on the CPU, do they?

They run on the compute path. It's a very efficient way of emulating them, still faster than the traditional render path, but dedicated hardware will be a huge step up.
> Sonic Dream Team is coming to Mac on Dec 5th.

Oh, this will be on Apple Arcade, so is it just macOS or will it hit iPhone/iPad as well?
> With pure C and Fortran you're usually out of luck, as these are mostly not very well parallelized for either CPU or GPU.

Aren't C and Fortran easy to parallelize? I swear I've seen a thread where someone tested a Fortran benchmark and it showed almost perfect scaling on AS Macs.
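Both can be true: it depends less on the language than on the loop structure. Independent per-element loops (common in benchmarks) scale almost perfectly across cores, while a time-stepping recurrence (common in simulation codes) can't be chopped up the same way. A toy sketch of that distinction in Python (the function names and workload are made up; real C/Fortran parallelism would use OpenMP or MPI, and Python threads won't actually speed up CPU-bound work because of the GIL, so this only illustrates the structure):

```python
from concurrent.futures import ThreadPoolExecutor

def per_particle_work(i, n=1000):
    # Each i is independent of every other i: the loop over i is
    # "embarrassingly parallel" and splits cleanly across workers.
    return sum((i + 1) / (k + 1) for k in range(n))

def serial(num):
    return [per_particle_work(i) for i in range(num)]

def parallel(num, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(per_particle_work, range(num)))

def integrate(x0, steps, dt=0.01):
    # dx/dt = -x with explicit Euler: step i needs the result of
    # step i-1, so this loop cannot be distributed the same way.
    x = x0
    for _ in range(steps):
        x = x - dt * x
    return x

assert parallel(8) == serial(8)  # same results, order preserved
```

The first pattern is why a well-chosen Fortran benchmark can show near-perfect scaling; the second is part of why many legacy simulation codes don't.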
> I've done a bit of radio astronomy in the past with my students, but they're usually CS, math or EE students, not physics.

Oh.
> Steam survey for Oct is out. Windows takes equal shares from macOS and Linux. After steady growth for Apple Silicon each month, the growth has leveled off with a minor change this time.

Has macOS share ever been below 1% in the hardware survey?