People can make all of the power consumption comparisons they want, but in some cases, especially for the power users Apple should want to win over, performance is more important than power savings. This is why I've always believed Apple should take on an exclusive macOS partner to build Macs that interoperate with more industry standards, so customers don't have to make OS-level decisions/changes. Nvidia and AMD GPUs already work with ARM/AS.
I agree. Some of us would love more power, and those spending thousands on a Mac are not really that concerned about saving a small amount on electric bills.
Once you start plugging in docks, external drives, extra monitors etc, the low power claims kind of go out the window anyway.
 
  • Love
Reactions: Cape Dave
I am so on the fence about buying an M4 Max Mac Studio with Apple's extortionate 128GB RAM. Anybody running smaller LLMs on the desktop want to talk me out of it?
My first foray into LLMs was when I got the M4 Pro MacBook Pro with 24GB RAM. That worked reasonably well with smaller models (I think I even squeezed a 27B model on there once, but it was tight). I generally stuck with 10B models (6-8GB) for my story-writing brainstorming sessions. It gave me a taste for something bigger.

I went for the 128GB Studio, primarily because I wanted to try out some 70B models that come in at 60-85GB depending on quant, and I also wanted to push the boundaries on context length.

Another reason for the 128GB was that I wanted the headroom to run LLMs *and* Stable Diffusion at the same time, without needing swap. I also wanted the option of running multiple LLMs at the same time (right now, I’m using an LLM through VSCode via LM Studio & Continue, and I’m learning a bit of Python by writing my own minimalist GUI for llama.cpp which, naturally, requires an LLM as well).
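
For anyone curious, the plumbing behind a minimalist GUI like that is mostly just HTTP. Here's a rough Python sketch of the kind of call it makes, assuming a local llama-server (or LM Studio's built-in server) is exposing an OpenAI-compatible chat endpoint; the port and model name are placeholders for whatever your own setup uses, not anything from my actual project:

```python
# Minimal chat call against a local OpenAI-compatible endpoint
# (llama.cpp's llama-server or LM Studio's local server).
# The URL and model name below are placeholders - adjust for your setup.
import json
import urllib.request

ENDPOINT = "http://localhost:8080/v1/chat/completions"  # assumed default llama-server port

def chat(prompt: str, model: str = "local-model") -> str:
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # OpenAI-style response: take the first choice's message content
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(chat("Give me three brainstorming prompts for a short story."))
```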

There will be plenty of LLMs that you can run on 64GB RAM, or 48GB RAM, and even 36GB RAM will offer more breathing space than my old 24GB MacBook Pro. There are LLMs from 1GB to over 1TB. It’s like crazy town trying to decide which ones you want to look into.
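
If it helps anyone sizing their own purchase, the back-of-the-envelope I use is weights ≈ parameters × bits-per-weight ÷ 8, plus a few GB of headroom for context (KV cache) and the rest of the system. A quick Python sketch with rough rule-of-thumb numbers (these are ballpark figures, not exact GGUF file sizes):

```python
# Ballpark memory needed for a quantized LLM:
# weights ~= params * bits_per_weight / 8 bytes, plus headroom for
# the KV cache and runtime overhead. Rough figures, not exact file sizes.

def model_gb(params_billion: float, bits_per_weight: float, overhead_gb: float = 4.0) -> float:
    weights_gb = params_billion * bits_per_weight / 8  # 1e9 params * bits/8 bytes = GB
    return weights_gb + overhead_gb

for params, bits, label in [(10, 4.5, "10B @ ~Q4"), (27, 4.5, "27B @ ~Q4"),
                            (70, 6.5, "70B @ ~Q6"), (70, 8.0, "70B @ Q8")]:
    print(f"{label}: ~{model_gb(params, bits):.0f} GB")
```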
 
  • Like
Reactions: splifingate
I am so on the fence about buying an M4 Max Mac Studio with Apple's extortionate 128GB RAM. Anybody running smaller LLMs on the desktop want to talk me out of it?
I’m running the laptop version of this and it’s the best modern Mac I’ve ever had. Only the 6-core Mac Pro *at launch* felt in the same league, and I was running 3 SSDs in RAID 0 at the time, which no one was doing. Even so, it felt inadequate once Lion launched a year or two later.

I have had a dozen Macs or so between the ones my employer provided and personal purchases, so I’ve got a pretty large sample size.

Go for it if you can afford it and plan to keep the machine for 3+ years, LLMs or not, unless your workload is light. I’ve seen 60+ GB cached with some normal development work, not even getting into AI.

Everything is so fast. Keep in mind that if you want to run a smaller model while still getting work done, you’ll want enough system RAM left over for whatever else you’re doing; Simon Willison has some recent posts about this where he’s running into issues with only 64GB.

*edit to correct Simon’s name
 
Last edited:
That makes sense, except I've got USB-C ports on my iMac that have gotten looser with time (even with new cables), and I've seen reports online of people having to replace USB-C ports on their MBPs because they loosened. All of this suggests that USB-C also has a likely failure mode that affects the installed ports.

The only port of either type I've had issues with has been Lightning on my old iPhone.

Type-C, when done properly like the Apple ports, has (essentially) strain relief on the plug, where the plastic lines up flush with the body of the device. This prevents flexing, etc. from being able to break the plug.

Been running type C for 5+ years now on laptops and never had a failure.
 
So, in terms of gaming performance: shouldn’t Cyberpunk have launched natively on macOS by now? It kinda feels like they’re delaying it on purpose, maybe to show it off at WWDC alongside a new Mac Pro with M5 Ultra or Extreme chips and bigger fps numbers?

And could those new M5 Ultra / Extreme chips actually compete with something like the RTX 5090, at least in theory? Or is that way too optimistic? For the lower-end SoCs, maybe Apple just throws in their own version of DLSS and calls it a day?
 
So, in terms of gaming performance: shouldn’t Cyberpunk have launched natively on macOS by now? It kinda feels like they’re delaying it on purpose, maybe to show it off at WWDC alongside a new Mac Pro with M5 Ultra or Extreme chips and bigger fps numbers?

And could those new M5 Ultra / Extreme chips actually compete with something like the RTX 5090, at least in theory? Or is that way too optimistic? For the lower-end SoCs, maybe Apple just throws in their own version of DLSS and calls it a day?

They have to add frame generation and path tracing to Metal, so my guess is at WWDC.
 
  • Like
Reactions: UpsideDownEclair
It’s something I’ve always “known” wasn’t a software problem (Apple’s WWDC videos make it clear, but it’s good to better understand the technical limitations), but I’m sure people will still be talking about how eGPUs are coming to Apple Silicon any day now. :)

The fellow that just left Asahi was adamant about it being possible… either he wasn’t aware, or he was providing false hope to folks.
You misunderstood his message. He was only adamant that it was possible, not that it was useful. He knew the workaround he was thinking of would intrinsically limit eGPU performance so much that it made no sense for him to actually implement it. There were a million other things to work on that would actually benefit users, so playing around with eGPU wasn't a good way to allocate his time.

Honestly I think one of the things that burned Hector out was trying to communicate to the public. He liked being very open about things he discovered, which I for one really appreciated, but wasn't always the best at writing for comprehension by non-engineers. The trouble with speaking engineer to the public is that when you say "I totally could do this (but here's why it makes no sense)" you must take extreme care to overexplain and overemphasize the latter part. In fact, you must lead with it, not lead with the "totally could do it" part. Otherwise, the less technical public invariably mutates your statement into just "I can do this", which quickly morphs into "I'm going to do this", and suddenly there's a pitchfork mob demanding updates on eGPU support when you have no intent of ever starting work on it.

eGPU support is only one example, I saw this pattern play out several times, and it resulted in Hector constantly having to correct mistaken ideas about the future of Asahi on /r/asahilinux. Not the only reason for burnout by a long shot, but I think it was one.
 
So you're saying that USB-C is more easily replaceable than Lightning if it fails (since the most likely failure point has been moved to the plug), but is less robust and thus more likely to fail.

That makes sense, except I've got USB-C ports on my iMac that have gotten looser with time (even with new cables), and I've seen reports online of people having to replace USB-C ports on their MBPs because they loosened. All of this suggests that USB-C also has a likely failure mode that affects the installed ports.

My guess is that the "hold" with USB-C is supposed to come from the spring-contact connection that is internal to the plug, and is relatively weak. This is supplemented by the friction/stabilization provided by the contact between the outside of the plug and the surface it contacts inside the device; this friction/stabilization may not be part of the USB standard, but it still helps. When this loosens, most of the hold is from the spring-contact connection alone, which is what makes the ports seem loose.

While all ports probably loosen with use, the USB-C ports are the only ones where the loosening has been enough for me to notice.

I've been thinking on this for a minute:

I have had no real problems with the actual port; what breaks the experience is the corrosion that I seem to find on just about every Lightning plug I've ever 'owned' . . . it's like they are manufactured so that they corrode to the point of dysfunction in a rather short amount of time.

Port == Nice

Plug == Ew
 
The only port of either type I've had issues with has been Lightning on my old iPhone.

Type-C, when done properly like the Apple ports, has (essentially) strain relief on the plug, where the plastic lines up flush with the body of the device. This prevents flexing, etc. from being able to break the plug.

Been running type C for 5+ years now on laptops and never had a failure.
Understood—your experience and mine are the opposite. I'm new to the iPhone, but I've had a Nano for several years, and it's heavily used. My only issue with its Lightning port has been from carrying it in my pocket -- lint gets into the port, preventing the plug from connecting. But that's easily fixed by removing the lint with needle-point tweezers.

More broadly, I've found the Lightning port starts with and retains a stronger, more positive connection than USB-C.
I've been thinking on this for a minute:

I have had no real problems with the actual port; what breaks the experience is the corrosion that I seem to find on just about every Lightning plug I've ever 'owned' . . . it's like they are manufactured so that they corrode to the point of dysfunction in a rather short amount of time.

Port == Nice

Plug == Ew
Yeah, that's my experience as well. I have to replace the Nano's Lightning cable about once/year. Ironically, this means Lightning has achieved USB-C's design goal: Offloading the primary failure mode from the port to the cable!
 
Last edited:
He was only adamant that it was possible, not that it was useful.
eGPUs, in the way that your average person might google “eGPU” and consume the first few entries to understand what they do and how they work… THAT is not possible on Apple Silicon, though. eGPUs, the way they currently work on Intel portables, are not possible on Apple Silicon. That has never been possible, and it never will be, because Apple would never make the changes required for it to be possible.

Found a thread I was following where (if that was him) he was asked if eGPUs were possible, like, in the same way they are used connected to Windows laptops. It appears his problem is (well, WAS) that he’s very unequivocally Yes on “possible” and then gets wishy washy around whether or not it would be implemented. Even when someone provided a very clear use case that he knew was NOT true for Apple Silicon (he’d already listed reasons why it wasn’t true for Apple Silicon :D), he could not bring himself to type that that very narrow use case would not be possible. So, there was something else there beyond just poor communication, and he’s probably better off away from the project.
 
eGPUs, in the way that your average person might google “eGPU” and consume the first few entries to understand what they do and how they work… THAT is not possible on Apple Silicon, though. eGPUs, the way they currently work on Intel portables, are not possible on Apple Silicon. That has never been possible, and it never will be, because Apple would never make the changes required for it to be possible.
See, this is an example of where people like Hector (and now, myself) get into trouble. You've seemingly integrated an untrue idea into your worldview - that Apple Silicon hardware cannot talk to an eGPU at all - and that makes it very difficult to explain anything to you.

Found a thread I was following where (if that was him) he was asked if eGPUs were possible, like, in the same way they are used connected to Windows laptops. It appears his problem is (well, WAS) that he’s very unequivocally Yes on “possible” and then gets wishy washy around whether or not it would be implemented. Even when someone provided a very clear use case that he knew was NOT true for Apple Silicon (he’d already listed reasons why it wasn’t true for Apple Silicon :D), he could not bring himself to type that that very narrow use case would not be possible. So, there was something else there beyond just poor communication, and he’s probably better off away from the project.
Apple Silicon hardware can in fact talk to an eGPU. Discrete GPUs are just PCIe devices, and AS support for PCIe is fully generic. It can map any PCIe device's collection of BARs (base address registers, descriptors for the card's memory-mapped IO resources) into Apple Silicon's physical memory address space.
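
To make the BAR part a bit more concrete: on Linux (Asahi included), you can see each PCIe device's BAR regions straight from sysfs, no special tooling required. A small sketch; the sysfs layout here is standard Linux, nothing Apple-specific:

```python
# Print the size of each populated BAR for every PCIe device.
# Lines 0-5 of the sysfs 'resource' file are BAR0-BAR5, formatted as
# "start end flags" in hex; an all-zero line means the BAR is unused.
import glob
import os

for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
    with open(os.path.join(dev, "resource")) as f:
        for i, line in enumerate(f):
            if i >= 6:  # only the first six lines are BARs
                break
            start, end, _flags = (int(x, 16) for x in line.split())
            if end > start:
                size_mib = (end - start + 1) / (1 << 20)
                print(f"{os.path.basename(dev)} BAR{i}: {size_mib:.1f} MiB")
```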

The one thing Apple Silicon can't do with these MMIO resources (that x86 PCs can) is allow its CPUs to perform misaligned accesses to PCIe device memory. If you don't know what a misaligned access is, it's reading or writing an object of size N bytes (where N is a power of 2) when the address of the object in memory is not divisible by N.
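
In code terms, that definition is just a divisibility check on the address. A tiny illustration (nothing Apple-specific, just the arithmetic):

```python
# An access of N bytes (N a power of two) at address A is aligned
# iff A is divisible by N, i.e. A % N == 0 (equivalently A & (N - 1) == 0).

def is_aligned(addr: int, size: int) -> bool:
    assert size > 0 and (size & (size - 1)) == 0, "size must be a power of two"
    return addr % size == 0

print(is_aligned(0x1000, 8))  # True: an 8-byte access at a 4 KiB boundary
print(is_aligned(0x1003, 4))  # False: a 4-byte access at an address ending in 3
```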

For nearly all types of PCIe device, that's fine; you never needed misaligned accesses anyway. But when trying to talk to AMD and Nvidia GPUs, misaligned accesses become important. Mind you, they do not need to be important: you can always structure data in GPU memory as fully aligned, and the GPU can work with that. However, there's a very large library of software out there written against AMD and Nvidia GPUs (both drivers and application software) which assumes it can treat mapped GPU memory just like regular RAM, implying that misaligned access is available and costs next to nothing. That's where the problems come from.

This is why, when you ask someone like Hector "is it possible?", he is going to say yes. That really is the only honest answer. Effectively, it is "just" a software problem: if you could fix every bit of software that ever touches PCIe GPU memory to always align every access, it would all work great and performance would be good.

So sure, you can find plenty of things Hector wrote that you see as "wishy washy" and evasive, but are really just him trying to explain that while it's technically possible, it's an enormous task for a small team. Worse, it would touch lots of projects they'd have difficulty getting to go along with it (including but not limited to closed-source software). Or sometimes he talked about workarounds he could imagine which don't require patching everything in sight, but they'd have a performance cost that probably wouldn't be acceptable.
 
Yeah, that's my experience as well. I have to replace the Nano's Lightning cable about once/year. Ironically, this means Lightning has achieved USB-C's design goal: Offloading the primary failure mode from the port to the cable!
Lightning has a primary failure mode in the port - there are spring contacts in it, and like all springs, over enough insertion cycles, they will fatigue enough to fail. USB-C moves all essential spring fatigue failures into the cable.

(The springs in the iMac's USB-C ports are not required by spec, I presume Apple put them in just to provide a little bit more retention. The main retention feature is the ears at the end of the 'tongue' inside the port, which spring clips inside the cable connector latch on to. Those ears can of course get abraded away over time, but really there's no such thing as a connector design which moves all failure modes to one side or the other, so you gotta pick your poison.)
 
  • Like
Reactions: Macintosh IIcx
Lightning has a primary failure mode in the port - there are spring contacts in it, and like all springs, over enough insertion cycles, they will fatigue enough to fail. USB-C moves all essential spring fatigue failures into the cable.
Yes, that's correct, but I'm afraid you've misunderstood my post. It's precisely because (at least based on where the springs are) Lightning has its primary expected failure mode in the port, that I commented to splifingate it's ironic we've both experienced the opposite (our Lightning plugs repeatedly failing while the ports remain fine, which is instead the primary expected failure mode of USB-C).

I.e., if someone says it's ironic that A happens instead of B, that means they understand B is the expected outcome, so there's no need to explain the latter to them.
 
  • Like
Reactions: splifingate
Unfortunately, I see people who want more from Apple in the near/distant future, but I think we will get less. The Mac Pro and Mac Studio will probably be erased; five years from now I don't think we will have those or something similar to replace them.
 
that Apple Silicon hardware cannot talk to an eGPU at all
No, I didn’t say it can’t talk to an eGPU at all :) What I’ve described is a very specific use case, a “usefulness” case if you will, that Hector has said is not possible, that another forum member has explained why it isn’t possible, and that other sites on the web also indicate is not possible. All I did was reiterate the relevant, very narrow use case (not “it is not possible”, but “this VERY specific use case, which is known to not be possible, is not possible”) and replace “not possible” with “impossible”.

This is why, when you ask someone like Hector “The way you can take an eGPU and slot it into an Intel-based Mac and have it take on display duties, is that possible with Apple Silicon?”, they read “eGPU”, “possible”, “Apple Silicon” and say “yes” regardless of what it means in the context of the very specific use case described. I hope that not being in a position to have to communicate for the project has been a welcome change for him.
 
Unfortunately, I see people who want more from Apple in the near/distant future, but I think we will get less. The Mac Pro and Mac Studio will probably be erased; five years from now I don't think we will have those or something similar to replace them.

I can see the Mac Pro going away (and I can also see it being retained in a renewed way), but the Studio? I don’t see that one going away. Are you also expecting the Ultra to go away?
 
I can see the Mac Pro going away (and I can also see it being retained in a renewed way), but the Studio? I don’t see that one going away. Are you also expecting the Ultra to go away?
Yes, I expect the Ultra to go away. Apple cares about profits, and the Ultra is not it.
I wouldn't be surprised if the next desktop Mac mini and Mac Studio were merged, to offer M, M Pro, and M Max.
 
FWIW the lack of eGPU on Apple Silicon isn't a software problem

It is a software problem. You need new drivers that are compatible with Apple's PCIe mapping modes.

What we have here is a basic failure of comprehension. The current discussion around the interpretation of Hector's statements is a prime example. It is possible to write eGPU drivers for Apple Silicon, but the effort would be enormous and the payout would be minuscule, so why bother?

I agree, but don’t doubt that there will still be those who expect to see eGPUs show up on Apple Silicon “any day now”.

It's because people have no patience or interest in learning about relevant details. We still have a surprisingly large crowd that insists that Boot Camp on Apple Silicon Macs would be trivial for Apple to do.
 
It is possible to write eGPU drivers for Apple Silicon, but the effort would be enormous and the payout would be minuscule, so why bother?
The problem, though, comes in being very emphatic about the “Yes” and not following up with a similarly emphatic “No” on whether it’s going to be done. That would be both honest AND set realistic expectations. It takes a very specific type of communicator who’s confident communicating on both counts, though, and that’s why there aren’t a lot of them. Stepping away was likely a very positive change for him.
 
The problem, though, comes in being very emphatic about the “Yes” and not following up with a similarly emphatic “No” on whether it’s going to be done. That would be both honest AND set realistic expectations. It takes a very specific type of communicator who’s confident communicating on both counts, though, and that’s why there aren’t a lot of them. Stepping away was likely a very positive change for him.

I strongly feel that Hector's comments on eGPUs have been misinterpreted. I never felt that his message was an "emphatic yes"; to me, it was technical commentary on a technical topic. Maybe Hector could have been more careful with his wording, but then maybe the gamer/Linux influencers could also be a bit more responsible about understanding the content. We are not politicians; I feel like we should not be under extreme scrutiny just because some influencers misunderstood or misrepresented what we say.
 