But I suspect few people need "full Outlook". There are some things I miss in the Mac version, but nowhere near enough to make me go "boy, a Mac is completely unworkable". And of the few people who do need it, I suspect most wouldn't need to run it locally; RDP is enough.

Full Outlook? Everyone knows

[screenshot attachment]

Eh, it’s fine.

And to be fair, Apple has kind of had their foot off the gas with Mail for a while.

I think Apple has the right idea making Mail/Calendar/Contacts/Notes/Reminders separate apps, unlike classic monolithic Outlook, but they haven’t iterated on several of them much. (Still love Notes, and Reminders has been evolving a lot, too. But the three others…)
 
When Apple does petty ads, I don’t care for it either. Why call for it?
It isn’t just petty advertising. It brings attention to the fact that when you have a need for general-purpose HPC, you are probably not going to find an Apple solution. If you are doing video work where the specific algorithms used are implemented directly in hardware in the M1, you are set, but you shouldn’t expect that performance to translate to a broader, more general set of problems, such as those that rely on methods like FDTD, FEM, etc.
 
There are cables you can buy for all those fantasyland display/connection options. :rolleyes:

Orrrrrrr the computer could simply, gasp, include an HDMI port, which is hugely convenient for almost any monitor, TV, or projector.

(But also, the point wasn’t even that. It was that Apple’s chips should implement HDMI 2.1 rather than 2.0A.)
 
Orrrrrrr the computer could simply, gasp, include an HDMI port, which is hugely convenient for almost any monitor, TV, or projector.

(But also, the point wasn’t even that. It was that Apple’s chips should implement HDMI 2.1 rather than 2.0A.)
When they list the following
  • A display engine that drives external displays.
  • Integrated Thunderbolt 4 controllers that offer more I/O bandwidth than before.
Did you ever get the impression that M1 family hardware video support was specific to an HDMI version other than the generic HDMI 2.0A/B?

Also, there is some discussion on Reddit that macOS itself doesn't allow exceeding 4K 60 Hz via HDMI? (link)
 
When they list the following
  • A display engine that drives external displays.
  • Integrated Thunderbolt 4 controllers that offer more I/O bandwidth than before.
Did you ever get the impression that M1 family hardware video support was specific to an HDMI version other than the generic HDMI 2.0A/B?

Also, there is some discussion on Reddit that macOS itself doesn't allow exceeding 4K 60 Hz via HDMI? (link)

The original post said: "hopefully the M2 family add HDMI 2.1."

I think that's a perfectly valid ask, whether the SoC needs to implement it, another controller needs to sit in between, the internal wiring isn't shielded enough, or whatever the limitation is.
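
For a rough sense of why the 2.0 vs. 2.1 distinction matters, here's a quick back-of-the-envelope bandwidth sketch (the bit depth, blanking overhead, and link figures are approximate assumptions, not exact HDMI timing math):

```swift
// Rough uncompressed video bandwidth (assumed: 10-bit RGB, ~20% blanking
// overhead, no Display Stream Compression).
func videoGbps(width: Double, height: Double, hz: Double,
               bitsPerPixel: Double = 30, blanking: Double = 1.2) -> Double {
    width * height * hz * bitsPerPixel * blanking / 1e9
}

let uhd60  = videoGbps(width: 3840, height: 2160, hz: 60)    // ~17.9 Gb/s
let uhd120 = videoGbps(width: 3840, height: 2160, hz: 120)   // ~35.8 Gb/s

// HDMI 2.0 carries 18 Gb/s raw (~14.4 Gb/s of video after 8b/10b coding), so
// 4K120, or 4K60 at higher bit depths, needs HDMI 2.1's 48 Gb/s FRL link or DSC.
print(uhd60, uhd120)
```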
 
Orrrrrrr the computer could simply, gasp, include an HDMI port, which is hugely convenient for almost any monitor, TV, or projector.

(But also, the point wasn’t even that. It was that Apple’s chips should implement HDMI 2.1 rather than 2.0A.)
The Studio Display has one Thunderbolt 3 (USB-C) port and three USB-C ports... no HDMI. Now I have an HDMI port on my Mini that I will not be using. If it were a TB3 port, it would be usable for me. That, to me, is a waste.
 
The entry level Max is priced about $250 too high and the Ultra is priced about $1500 too high for the added utility they each provide. The M1 mini is far more compelling for most use cases. An expandable upcoming Mac Pro will steal from Ultra sales.
 
The entry level Max is priced about $250 too high and the Ultra is priced about $1500 too high for the added utility they each provide. The M1 mini is far more compelling for most use cases.

If the M2 mini gains an M2 Pro option, yep.

An expandable upcoming Mac Pro will steal from Ultra sales.

Not if pricing starts way above the Mac Studio, which it will. (Otherwise, why bother having both products?)
 
eclecticlight.co has been trying to look into this. It's not easy to find a use case that can be timed, and things appear to be very much in flux, but one case that can be tested is Visual Look Up (have your Mac recognize a piece of artwork, a type of flower, or whatever). He verified that this, on a "single" M1, takes about half the time it takes on a high-end Intel Mac (I think he used an iMac Pro).

This gives us one datapoint, but leaves unspecified whether the task would be another 2x faster on an Ultra.

To be fair, the NPU is somewhat like where GPGPU was about 15 years ago (when CUDA had just come out). There's an expectation that great things are possible, but also that everything is in flux at both the HW and SW levels.
Apple uses this stuff right now for photo classification, image lookup, live text, and some parts of Siri (I think both the voice analysis and the voice synthesis, but mostly not the actual "answer/task generation"). What's unclear, for example, is whether they even use the NPU yet for language tasks (like translation); so much of that stuff is old code that runs on pre-NPU devices, and there's always a tension between keeping that running (and backwards compatible) vs. throwing it away, starting from scratch, and just saying "Language Processing 2.0 only runs on A14/M1 and later".

A similar question could arise regarding encoding. If I have two encoder engines available in an M1 Ultra, can I encode to h.265 at higher quality? Can I even do the simpler task of performing two such hardware encodes at once?

In one sense you can say "Of course you should be able to, anything else is dumb"; in another sense, building up ANY serious API/driver infrastructure takes time, it really does, and often the way this plays out is by the time the nicely functioning versions of all these APIs ship, it's three years after the first hardware shipped.
I mean, even something like the Live Text and Visual Lookup UIs are (let's be honest) pretty awful; they do the job but are so damn clumsy! It takes a year to get the basic tech into people's hands, then at least another year to see how people use it and figure out a better UI packaging.
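
To make the "third parties can reach this" part concrete, Live Text-style recognition is exposed through the Vision framework; a minimal sketch (illustrative only, and whether any of it actually gets dispatched to the Neural Engine is the framework's call, not the caller's):

```swift
import CoreGraphics
import Vision

// Recognize text in an image, roughly the way Live Text-style features do.
func recognizeText(in image: CGImage) throws -> [String] {
    let request = VNRecognizeTextRequest()
    request.recognitionLevel = .accurate   // the ML-model-backed path
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])
    return (request.results ?? []).compactMap { $0.topCandidates(1).first?.string }
}
```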
Thanks for that pointer to eclectic light. When I find time, I’ll probably skim through that in more detail, but found this interesting:

I think the conclusion is that the neural engines are underutilized, which probably means they will be doubly underutilized in an Ultra. Which is a shame, because there’s a lot of power there.

I agree with your assessment that this is still early days for such tools, and the point for now is to make the hardware available so people begin using it. The benefit of having the ANE cores in the M1 is likely less for the M1 users of today and more about making sure using them is mainstream by the M3, or whatever.

Of course! That was basically essential before the device could ship, since it was going to be a selling feature.
And obviously this would be a goal for the NPU (and media systems) going forward. The question is whether it has been achieved today.
Based on what I read in the link above, it sounds like most third-party AI work is still hitting the CPU/GPU rather than the neural engines. If that’s true, then they are mostly being used for features in the OS and probably won’t benefit from the additional cores. The OS features are designed to run a common set of functionality fast enough to be transparent. Doubling the cores won’t help unless they’re going to make certain capabilities only available on the Ultra Studio, which they won’t.

One exception might be that someone commented that Final Cut does ML on input footage; that’s something that might benefit from running faster.

Same for the media engines— so far it doesn’t seem like tools like Handbrake are fully utilizing the media engines, but I’ll be interested to see if they eventually do.
 
I think the conclusion is that the neural engines are underutilized,

Definitely. This will improve over time especially for first-party code, but third-party code will probably always underutilize it, unless Apple provides APIs that make it very similar to competing engines on other platforms, which I don't think they're interested in. As soon as you're a third-party dev who writes apps for multiple platforms, something as highly specialized as the Neural Engine that only exists on that one platform is relatively unappealing, no matter how fast it may be.
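
To make that concrete: the main knob a third-party Core ML app has is a compute-units preference, roughly like the sketch below (the model URL is hypothetical, and Core ML still decides, per model and per layer, whether the Neural Engine actually gets used):

```swift
import CoreML
import Foundation

// Load a compiled Core ML model while allowing (but not forcing) ANE use.
func loadModel(at url: URL) throws -> MLModel {
    let config = MLModelConfiguration()
    config.computeUnits = .all           // allow CPU + GPU + Neural Engine
    // config.computeUnits = .cpuAndGPU  // or opt out of the ANE entirely
    return try MLModel(contentsOf: url, configuration: config)
}
```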

Same for the media engines— so far it doesn’t seem like tools like Handbrake are fully utilizing the media engines, but I’ll be interested to see if they eventually do.

I believe Handbrake uses VideoToolbox where possible, and VideoToolbox in turn uses Apple's hardware encoder.
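
For reference, a minimal sketch of that path (settings and error handling trimmed; how work is scheduled across one or two media engines is entirely up to the OS, not this API):

```swift
import VideoToolbox

// Ask VideoToolbox for a hardware-accelerated HEVC encode session.
func makeHEVCSession(width: Int32, height: Int32) -> VTCompressionSession? {
    var session: VTCompressionSession?
    let spec: [CFString: Any] = [
        kVTVideoEncoderSpecification_RequireHardwareAcceleratedVideoEncoder: true
    ]
    let status = VTCompressionSessionCreate(
        allocator: nil,
        width: width,
        height: height,
        codecType: kCMVideoCodecType_HEVC,
        encoderSpecification: spec as CFDictionary,
        imageBufferAttributes: nil,
        compressedDataAllocator: nil,
        outputCallback: nil,
        refcon: nil,
        compressionSessionOut: &session)
    return status == noErr ? session : nil
}
```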
 
It isn’t just petty advertising. It brings attention to the fact that when you have a need for general-purpose HPC, you are probably not going to find an Apple solution. If you are doing video work where the specific algorithms used are implemented directly in hardware in the M1, you are set, but you shouldn’t expect that performance to translate to a broader, more general set of problems, such as those that rely on methods like FDTD, FEM, etc.

The M1 Max and Ultra are general purpose computers that are very fast at general purpose problems, not just things they were optimized for.

 
Every single leak from Gurman on the post-M1 release rollouts has come true, along with the Jade-C die photos that clearly show the M1, M1 Pro (Jade-C Chop), M1 Max (Jade-C), M1 Ultra (Jade-2C), and one unreleased monster chip labeled Jade-4C that's twice the size of the Ultra.
My worry is that it’ll only be close to 4x the performance, but by what measure? The Ultra is not a 2x jump in performance in all measures; the majority, but not all.
 
The M1 Max and Ultra are general purpose computers that are very fast at general purpose problems, not just things they were optimized for.

When you have to simulate a 3D structure (for instance an optical waveguide in my case, but really any problem that requires solving differential equations with given boundary conditions) and you require a fine grid to get the solution to converge, 128 GB of main memory will not get you far. If you don’t have enough main memory for the problem, it isn’t that it just takes longer to get the solution; you crash the operating system and end up with no solution. I’ve seen this with both Windows and Macs. The only machines I’ve seen that can access more than 1 TB of main memory have Intel or IBM chips; I don’t think even AMD makes chips that can access that much memory. In addition, companies that make enterprise-grade software for electrical and optical engineering design and simulation develop their software to run on x86 under either Windows or Linux. Companies like Mentor Graphics, Keysight and Cadence are not going to develop their software to run natively on a Mac because the whole Mac market is too niche. So if you are an engineer who needs to perform that work, you need a PC or the current Mac Pro.
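
To put a rough number on that memory wall, a back-of-the-envelope sketch (the grid size, field count, and precision are illustrative assumptions, not any specific workload):

```swift
// Rough memory estimate for a uniform 3D FDTD grid: assumed double precision,
// 6 field components (Ex, Ey, Ez, Hx, Hy, Hz) plus ~3 coefficient arrays per cell.
func fdtdMemoryGiB(nx: Double, ny: Double, nz: Double,
                   arraysPerCell: Double = 9, bytesPerValue: Double = 8) -> Double {
    nx * ny * nz * arraysPerCell * bytesPerValue / 1_073_741_824
}

// A 2000^3 grid (sub-wavelength resolution over a structure many wavelengths long)
// already needs on the order of:
print(fdtdMemoryGiB(nx: 2000, ny: 2000, nz: 2000))   // ~536 GiB, far past 128 GB
```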
 
Forget HDMI altogether and put in an extra TB/USB 4.0 port and use that. That way you get an extra usable port, instead of a port you may never use. I'm just not sure why people want to waste a port on HDMI.
To present to any TV or projector made in the last 15 years.
 
The entry level Max is priced about $250 too high and the Ultra is priced about $1500 too high for the added utility they each provide. The M1 mini is far more compelling for most use cases. An expandable upcoming Mac Pro will steal from Ultra sales.
When you have to simulate a 3D structure (for instance an optical waveguide in my case, but really any problem that requires solving differential equations with given boundary conditions) and you require a fine grid to get the solution to converge, 128 GB of main memory will not get you far. If you don’t have enough main memory for the problem, it isn’t that it just takes longer to get the solution; you crash the operating system and end up with no solution. I’ve seen this with both Windows and Macs. The only machines I’ve seen that can access more than 1 TB of main memory have Intel or IBM chips; I don’t think even AMD makes chips that can access that much memory. In addition, companies that make enterprise-grade software for electrical and optical engineering design and simulation develop their software to run on x86 under either Windows or Linux. Companies like Mentor Graphics, Keysight and Cadence are not going to develop their software to run natively on a Mac because the whole Mac market is too niche. So if you are an engineer who needs to perform that work, you need a PC or the current Mac Pro.
If the Studio is priced too high, how come the demand is so high that mine won't arrive for 12 weeks? Also, it outperforms my
 