Apple would certainly never run an ad mocking a competitor.

…why? As a juvenile gotcha?
But I suspect few people need "full Outlook". There are some things I miss in the Mac version, but nowhere near enough to make me go "boy, a Mac is completely unworkable". And of the few people who do need it, I suspect most wouldn't need to run it locally; RDP is enough.
Apple would certainly never run an ad mocking a competitor.
It isn’t just petty advertising. It brings attention to the fact that when you have a need for general-purpose HPC, you are probably not going to find an Apple solution. If you are doing video work where the specific algorithms used are implemented directly in hardware in the M1, you are set, but you shouldn’t expect that performance to translate to a broader, more general set of problems, such as those that rely on methods like FDTD, FEM, etc.

When Apple does petty ads, I don’t care for it either. Why call for it?
There are cables you can buy for all those fantasyland display/connection options.

To connect a display that exists in the real world, not fantasyland.
Did you ever get the impression that M1 family hardware video support was specific to an HDMI version other than the generic HDMI 2.0A/B? When they list the following:
- A display engine that drives external displays.
- Integrated Thunderbolt 4 controllers that offer more I/O bandwidth than before.
Also, there is some discussion on Reddit that macOS itself doesn't allow exceeding 4K 60 Hz via HDMI? (link)
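If you want to check that Reddit claim on your own machine rather than take it on faith, CoreGraphics can enumerate every mode macOS is willing to drive on each active display. A minimal sketch (macOS only; if the OS really caps HDMI at 4K60, no 3840x2160 mode above 60 Hz should show up for the HDMI-attached display):

```swift
import CoreGraphics

// Enumerate every display mode macOS offers on each active display.
// A 4K60 HDMI cap would show up as no 3840x2160 mode above 60 Hz
// on the HDMI-attached display.
var count: UInt32 = 0
CGGetActiveDisplayList(0, nil, &count)
var displays = [CGDirectDisplayID](repeating: 0, count: Int(count))
CGGetActiveDisplayList(count, &displays, &count)

for display in displays {
    guard let modes = CGDisplayCopyAllDisplayModes(display, nil)
            as? [CGDisplayMode] else { continue }
    for mode in modes {
        // refreshRate can report 0 on some panels; treat that as "unknown".
        print("display \(display): \(mode.width)x\(mode.height) @ \(mode.refreshRate) Hz")
    }
}
```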
Studio Display has one Thunderbolt 3 (USB-C) port, three USB-C ports... no HDMI. Now I have an HDMI port on my Mini that I will not be using. If it was a TB 3 port, it would be usable for me. That to me is a waste.

Orrrrrrr the computer could simply, gasp, include an HDMI port, which is hugely convenient for almost any monitor, TV, or projector.
(But also, the point wasn’t even that. It was that Apple’s chips should implement HDMI 2.1 rather than 2.0A.)
Call it M1 Supreme.

Mac Studio is a consumer device.
For Mac Pro, they can call it M1 Titan, or something similar.
Also, Xeon is not a word.
The entry level Max is priced about $250 too high and the Ultra is priced about $1500 too high for the added utility they each provide. The M1 mini is far more compelling for most use cases.
An expandable upcoming Mac Pro will steal from Ultra sales.
Thanks for that pointer to Eclectic Light. When I find time, I'll probably skim through it in more detail, but I found this interesting:

eclecticlight.co has been trying to look into this. It's not easy to find a use case that can be timed, and things appear to be very much in flux, but one case that can be tested is Visual Look Up (have your Mac recognize a piece of artwork, a type of flower, or whatever). He verified that this, on a "single" M1, takes about half the time it takes on a high-end Intel Mac (I think he used an iMac Pro).
This gives us one datapoint, but leaves unspecified whether the task would be another 2x faster on an Ultra.
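Visual Look Up itself has no public API you can time, so take this as a rough stand-in: Vision's built-in image classifier exercises the same kind of on-device inference, and wrapping one request in a timer gives a comparable single-image number. The file path is a placeholder, not from the thread:

```swift
import Foundation
import Vision

// Rough stand-in benchmark: time one on-device image classification.
// "photo.jpg" is a hypothetical path.
let url = URL(fileURLWithPath: "photo.jpg")
let request = VNClassifyImageRequest()
let handler = VNImageRequestHandler(url: url, options: [:])

do {
    let start = Date()
    try handler.perform([request])   // runs synchronously
    let elapsed = Date().timeIntervalSince(start)
    if let top = request.results?.first as? VNClassificationObservation {
        print("top label: \(top.identifier) (\(top.confidence)) in \(elapsed) s")
    }
} catch {
    print("classification failed: \(error)")
}
```

Note that nothing here tells you whether the work landed on the CPU, GPU, or Neural Engine; that scheduling is entirely up to the framework, which is exactly why the Ultra question is hard to answer from the outside.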
To be fair, the NPU is somewhat like where GPGPU was about 15 years ago (when CUDA had just come out). There's an expectation that great things are possible, but also that everything is in flux at both the HW and SW levels.
Apple uses this stuff right now for photo classification, image lookup, live text, and some parts of Siri (I think both the voice analysis and the voice synthesis, but mostly not the actual "answer/task generation"). What's unclear, for example, is whether they even use the NPU yet for language tasks (like translation); so much of that stuff is old code that runs on pre-NPU devices, and there's always a tension between keeping that running (and backwards compatible) vs. throwing it away, starting from scratch, and just saying "Language Processing 2.0 only runs on A14/M1 and later".
A similar question could arise regarding encoding. If I have two encoder engines available in an M1 Ultra, can I encode to H.265 at higher quality? Can I even do the simpler task of performing two such hardware encodes at once?
In one sense you can say "Of course you should be able to, anything else is dumb"; in another sense, building up ANY serious API/driver infrastructure takes time, it really does, and often the way this plays out is by the time the nicely functioning versions of all these APIs ship, it's three years after the first hardware shipped.
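For concreteness on the encoder question: the public path to the hardware encoder today is VideoToolbox, and nothing at that layer lets an app pick which media engine a session runs on, or whether two sessions share one engine or split across both. A minimal sketch of requesting a hardware HEVC session (no frames submitted, error handling elided):

```swift
import CoreMedia
import VideoToolbox

// Ask VideoToolbox for a hardware-accelerated HEVC (H.265) encoder.
// kVTVideoEncoderSpecification_RequireHardwareAcceleratedVideoEncoder
// exists as a stricter variant, but neither key exposes *which* media
// engine services the session.
let spec = [
    kVTVideoEncoderSpecification_EnableHardwareAcceleratedVideoEncoder: kCFBooleanTrue
] as CFDictionary

var session: VTCompressionSession?
let status = VTCompressionSessionCreate(
    allocator: nil,
    width: 3840, height: 2160,
    codecType: kCMVideoCodecType_HEVC,
    encoderSpecification: spec,
    imageBufferAttributes: nil,
    compressedDataAllocator: nil,
    outputCallback: nil,
    refcon: nil,
    compressionSessionOut: &session
)
print(status == noErr ? "got a hardware HEVC session" : "failed: \(status)")
```

Whether two such sessions on an Ultra actually land on separate engines is exactly the kind of driver-level behavior that, per the point above, tends to mature years after the hardware ships.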
I mean, even something like the Live Text and Visual Lookup UIs are (let's be honest) pretty awful; they do the job but are so damn clumsy! It takes a year to get the basic tech into people's hands, then at least another year to see how people use it and figure out a better UI packaging.
Based on what I read in the link above, it sounds like most third-party AI work is still hitting the CPU/GPU rather than the neural engines. If that's true, then they are mostly being used for features in the OS and probably won't benefit from the additional cores. The OS features are designed to run a common set of functionality fast enough to be transparent. Doubling the cores won't help unless they're going to make certain capabilities only available on the Ultra Studio, which they won't.

Of course! That was basically essential before the device could ship, since it was going to be a selling feature.
And obviously this would be a goal for the NPU (and media systems) going forward. The question is whether it has been achieved today.
I think the conclusion is that the neural engines are underutilized. Same for the media engines: so far it doesn't seem like tools like Handbrake are fully utilizing them, but I'll be interested to see if they eventually do.
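On the third-party side, the only public knob is Core ML's computeUnits: an app can allow the Neural Engine or keep a model off it, but it can't force work onto the ANE, and nothing exposes the Ultra's second NPU cluster separately. A sketch (the model file name is hypothetical):

```swift
import Foundation
import CoreML

// The only public control over ANE usage: allow it (.all) or exclude it.
// You cannot force Core ML onto the Neural Engine, and nothing here
// addresses "both NPUs" on an Ultra.
let config = MLModelConfiguration()
config.computeUnits = .all            // CPU + GPU + Neural Engine
// config.computeUnits = .cpuAndGPU   // what ANE-unaware code effectively gets

// "Classifier.mlmodelc" is a hypothetical compiled model.
let modelURL = URL(fileURLWithPath: "Classifier.mlmodelc")
do {
    let model = try MLModel(contentsOf: modelURL, configuration: config)
    print(model.modelDescription)
} catch {
    print("failed to load model: \(error)")
}
```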
Ha. That prob makes more sense. 😜 I was thinking they would stack them on top of each other.
My worry is that it'll only be close to 4x the performance, but by what measure? The Ultra is not a 2x jump in performance in all measures; in the majority, but not all.

Every single leak from Gurman on the post-M1 release rollouts has come true, along with the Jade-C die photos that clearly show the M1, M1 Pro (Jade-C Chop), M1 Max (Jade-C), M1 Ultra (Jade-2C), and one unreleased monster chip labeled Jade-4C that's twice the size of the Ultra.
Call it M1 Supreme.

That word has bad connotations: power-grabby supremacy, violence, and dominance.
Where the Ultra connects two chips vertically, the Ultra+ will take two double-stacked Ultras and connect them horizontally. Boom.
When you have to simulate a 3D structure (for instance, an optical waveguide in my case, but really any problem that requires solving differential equations with given boundary conditions) and you require a fine grid to get the solution to converge, 128 GB of main memory will not get you far. If you don't have enough main memory for the problem, it isn't that it just takes you longer to get the solution; you crash the operating system and end up with no solution. I've seen this with both Windows and Macs. The only machines I've seen that can access more than 1 TB of main memory have Intel or IBM chips. I don't think even AMD makes chips that can access that much memory. In addition, companies that make enterprise-grade software for electrical and optical engineering design and simulation develop their software to run on x86 under either Windows or Linux. Companies like Mentor Graphics, Keysight, and Cadence are not going to develop their software to run natively on a Mac because the whole Mac market is too niche. So if you are an engineer who needs to perform that work, you need a PC or the current Mac Pro.

The M1 Max and Ultra are general purpose computers that are very fast at general purpose problems, not just things they were optimized for.
Surprising HPC results with M1 Max… | Apple Developer Forums (developer.apple.com)
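To put a rough number on the memory argument above, here is a back-of-the-envelope sketch. All grid sizes are hypothetical; it assumes a uniform cubic FDTD grid with six field components per cell, double precision, and no PML or auxiliary storage counted:

```swift
// Back-of-the-envelope memory for a uniform 3D FDTD grid.
// Hypothetical numbers, just to show the cubic scaling.
let fieldsPerCell = 6      // Ex, Ey, Ez, Hx, Hy, Hz
let bytesPerValue = 8      // double precision

for cellsPerAxis in [1_000, 2_000, 3_000] {
    let cells = Double(cellsPerAxis) * Double(cellsPerAxis) * Double(cellsPerAxis)
    let bytes = cells * Double(fieldsPerCell) * Double(bytesPerValue)
    print("\(cellsPerAxis)^3 grid: \(bytes / 1e9) GB")
    // 1000^3 ≈ 48 GB, 2000^3 ≈ 384 GB, 3000^3 ≈ 1,296 GB
}
```

Which is the quoted point in miniature: refining the grid 3x along each axis already blows past 1 TB, long before software support even enters into it.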
Forget HDMI altogether and put in an extra TB/USB 4 port and use that. That way you get an extra usable port, instead of a port you may never use. I'm just not sure why people want to waste a port on HDMI.

To present to any TV or projector made in the last 15 years.
M1 -> M1 Pro -> M1 Max -> M1 Ultra -> M1 Extreme?
The entry level Max is priced about $250 too high and the Ultra is priced about $1500 too high for the added utility they each provide. The M1 mini is far more compelling for most use cases. An expandable upcoming Mac Pro will steal from Ultra sales.
if the studio is priced too high - how come the demand is so high that mine won't arrive for 12 weeks? Also - it outperforms my