Is Apple customer service able to see where an order is when it's "Shipping to Store"? I know the buyer can't, but was hoping they could.
 
Mine is now at "preparing to ship".

I'm quite excited. This will be the first truly "overpowered" Mac I've purchased since the first-generation Xeon-powered Mac Pro way, way back in the day. That machine lasted me a very long time. Jammed in tons of RAM and a whole bunch of fast HDDs.

Since then, my purchases have been more reasonable and not over-done for the requirements I've had at the time, which probably resulted in more frequent updates. Which, to be honest, is still my advice to most people. The 16/40-core M4 Max, being basically as fast (faster, in some respects) as the M2 Ultra, is almost certainly more than I need, but will still be a blast to use. I might just have to invent new work for myself to use all the power available!
 
I ordered the M3 Ultra/256GB/4TB on March 8th. Still in processing status. Apple says delivery March 19-21.

My Studio order switched to Preparing to Ship overnight. Hoping I have it Wednesday or Thursday.

My order switched to "Shipped" overnight. It is in Chek Lap Kok, HK right now. UPS shows a delivery date of Wednesday.
 
Mine is coming through FedEx. It took 2 days to get from China to Memphis to Portland. And it is looking like it will take 4 days to get 20 miles from Portland to my house. I hate FedEx sometimes. The package just sits for more than 24 hours in a location with no updates.

They promised Wednesday the 19th, and FedEx is going to deliver it then even though it made it to Portland on the 15th.
 
I hate when reviewers use this word. Building my PC, "binned" meant I was selecting the highest-scoring CPU, actually tested to have the best overclocking outcomes. In Apple land it means a chip with a lower core count than the full version on offer. In truth, all chips are binned in order to be used appropriately and minimize yield loss. So I also find the use of the word a bit comical!
This may be obvious, but I'm going to point it out anyway since there could be people that aren't aware of why this term upsets people. In US English "bin" does not refer to a trash can or rubbish receptacle like UK English. It's just a general storage container, so the process of binning a CPU just means putting it into the correct container with other chips that have the same number of cores active. In this context binning just means grouping the common chips together.

If they're selling multiple parts from the same production line based on how many cores passed validation and can stay enabled, both the good ones and the lesser ones are getting grouped, or binned, regardless. To me it makes sense that they're calling out the lesser models as the binned ones, because in an ideal world all of the cores would be perfect 100% of the time and they wouldn't have to offer different models to use up the partially successful dies. Note to anyone unfamiliar with the process: the chips are designed with this problem in mind, so my M3 Ultra could potentially have only a few failed cores, but they'll disable a fixed number so that they're not selling a bunch of random variants.
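For anyone who wants the grouping idea spelled out, here's a toy sketch. The core counts, thresholds, and SKU names are all invented for illustration (loosely echoing the 32/40-core GPU options), not Apple's actual process:

```python
from collections import Counter

# Toy model of CPU binning: dies are sorted into SKUs by how many
# cores passed validation. All numbers and SKU names are invented.

def bin_die(working_gpu_cores: int) -> str:
    """Assign a die to a SKU bin; extra working cores get fused off
    so every unit in a bin behaves identically."""
    if working_gpu_cores >= 40:
        return "full (40-core)"
    elif working_gpu_cores >= 32:
        return "binned (32-core)"   # die may have 32-39 good cores
    else:
        return "scrap / other use"

# Group a hypothetical batch of dies with varying yields into bins.
batch = [40, 38, 33, 40, 31, 36, 40, 29]
bins = Counter(bin_die(cores) for cores in batch)
print(bins)
```

The point of the fixed thresholds is exactly what the post describes: a die with 38 good cores still ships as the 32-core part, so buyers never see random variants.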
 
I got a notice yesterday that my Studio would be delivered on Friday! Wahooo!

Got a notice this morning that it would be delivered today! 🥳 First time ever FedEx has delivered early!

It's arrived! Transferred my data from my MacBook Pro to the Studio via Migration Assistant and a Thunderbolt 4 cable (fast, man!).

This little guy is, for lack of a better word, snappy! Everything happens instantaneously. It's really remarkable. My old machine was no slouch, but I'm super happy to have a desktop again (and not just a laptop pretending to be a desktop). I'm glad I didn't cave in a few months ago and buy the Mac mini M4 Pro when I was thinking about doing so.
 
That sounds nuts to me. It’s exactly what I had planned to do too.
Still no luck after my latest attempt. A chat rep told me the phone reps have a "different ordering system" that should be able to give me more information. That was false; the phone rep ordered via the website more or less just like I've done. They got the same vague error message and couldn't give me any insight into why it keeps rejecting the order. I'm seriously at a loss as to why in-store pickup combined with a gift card isn't allowed at all.
 
I ordered March 15, around 5am AZ time; it processed literally within 20 minutes, and I got the shipping notice within 3 hours! It was promised for March 26, 2025. Got it today, March 19! Ridiculously fast!

M4 Max 128GB/1TB model. My very first Mac Studio. I originally placed an order for a base M3 Ultra but canceled it and got this instead. The M3 Ultra is too slow for inference anyway, so rather than adding more RAM to it, I figured I'd get the SME-supported M4 Max (16c/40c), since it's superior in single-thread performance to the M3 Ultra.

I gotta say, the packaging is legit. I never thought you could do it this way without styrofoam or some other poly foam. Impressed! Even the cord is high quality.
 

Attachments: IMG_0939.jpg, IMG_0967.jpg, IMG_0943.jpg, IMG_0953.jpg

OK, ran an initial test: QwQ-32B 4-bit. On my M3 Max with 64GB memory it was slow, I mean some 10 t/s. This one is MUCH FASTER. On both systems I use MLX to run it; I don't use LM Studio/Ollama. The extra memory also gives me plenty of room to hold all my web pages and other work while still being able to run inference. The biggest issue before was not enough RAM: with 30-40 pages of web material up, background apps running, etc., I just ran out of memory, in the red with swap loading all the time.

It just wasn't cutting it. But I'm going to tell you that if you're going to do serious AI stuff, don't bother with Apple. I have a 2x 3090 PC and it will smoke even the M3U, but the PC puts out a nasty amount of heat and uses up a lot of electricity. I got this Mac Studio because it will be my primary desktop system, and my goodness, Apple is just bar none spectacular with efficiency. It's just not suited at the moment for big models; even if you can load big LLMs, they're too slow! So instead opt for max memory, run some smaller LLMs, and enjoy the efficiency and quietness. This is a real home run for a desktop system. The M4 Max is just amazing! Fine-tuning will be much nicer too!

There is heat generated: the back blows out WARM air; the top is cool to the touch, but the corners are warm. Overall I don't hear the fans, and my NAS running in the background is louder than everything else. Just a fantastic desktop system!
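To put some rough numbers on the speed gap described above: token-by-token decoding is mostly memory-bandwidth bound, since each generated token re-reads the full weight set. Here's a back-of-envelope ceiling calculation; the bandwidth figures are approximate published specs, not measurements, and real throughput lands well below the ceiling:

```python
# Back-of-envelope decode ceiling: tokens/s <= bandwidth / bytes read per token.
# For a dense model, bytes per token is roughly the size of the quantized weights.
# All figures approximate, for illustration only.

weights_gb = 32e9 * 0.5 / 1e9   # 32B params at 4-bit (0.5 bytes each) = 16 GB

systems_gb_s = {
    "M3 Max":       400,    # ~400 GB/s
    "M4 Max (40c)": 546,    # ~546 GB/s
    "M3 Ultra":     819,    # ~819 GB/s
    "2x RTX 3090": 1872,    # ~936 GB/s each, model split across both
}

for name, bw in systems_gb_s.items():
    print(f"{name:13s} ceiling ~ {bw / weights_gb:5.1f} tok/s")
```

This is why the 3090 box "smokes" the Macs on raw speed, and also why the observed ~10 t/s on the M3 Max sits comfortably under its theoretical ceiling.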
 

Attachments: Untitled 5.jpg, Untitled 6.jpg

You sound like me, having multiple computers. I have a win10 pro desktop for a home server, a ‘mega rig’ for gaming (5950x, 64gb ram, 4090 FE), and an aged iMac 27 inch I’m waiting to replace any day now.

I’d love a new m4 studio though the truth is I don’t do anything high end, just basically photography. I just like toys and always go high end.

When I hear people talking about ai this and llm that, I have no idea what they’re referring to in a home environment.
 

Thanks for your informative post. I am also looking at running models around the same size, since as they get larger the performance deteriorates too quickly. Can you give an estimate of how many tokens/sec you were getting out of your M4 Max running the same size model?

I have been on the fence about getting a base Ultra or the same spec M4 Max as you (128GB/1TB SSD). When comparing an M3 Max and an M4 Pro with far fewer GPU cores, the M4 Pro was keeping up pretty well on MLX, and I was wondering if it could be the Arm v9/SME difference.

I'm actually away right now, and when I get back I will be pulling the trigger on a Mac Studio - but I can't get over the fact that the M4 Max in that configuration is so close to the M3 Ultra. Any other year I would go for the Ultra - but this year it's quite the dilemma.
 


Unfortunately, I wouldn't even call it a dilemma. The Ultra is… not cool this year. It seems like a 0.5 upgrade from the M2 Ultra. Not sure why they didn't do an M4 Ultra this time.
 

I need to pull up my old code base that runs a timer; this QwQ-32B setup doesn't take a verbose parameter, so I can't see actual tokens/s. But I'll say the older M3 Max I had with 64GB memory was not fast but not too slow - more like "annoying" - it ran around 8-10 t/s. Anything below 10 t/s is just annoyingly slow to me. Does it still work? Yes, of course, but it's just too slow for even my kids to query. The M4 Max is much faster. Let me get all my old stuff and put it back on this new machine. I also picked up a new base MacBook Pro (12c/16c, 512GB), since I no longer use the M3 Max laptop (sold it off!) - the new machine is just a dream. Let me digress:
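A timer like the one mentioned above is easy to reconstruct when the runner has no verbose flag; below is a minimal sketch with the generate call stubbed out (the stub function, its sleep, and its token count are all invented for the example; swap in a real MLX generate call):

```python
import time

def tokens_per_second(generate_fn, prompt: str) -> tuple[str, float]:
    """Time a generation call and report decode throughput.
    generate_fn must return (text, number_of_generated_tokens)."""
    start = time.perf_counter()
    text, n_tokens = generate_fn(prompt)
    elapsed = time.perf_counter() - start
    return text, n_tokens / elapsed

# Stub standing in for a real model call, purely for illustration.
def fake_generate(prompt):
    time.sleep(0.05)              # pretend to decode for 50 ms
    return "hello world", 12      # pretend 12 tokens were generated

text, tps = tokens_per_second(fake_generate, "test prompt")
print(f"{tps:.1f} tok/s")
```

Wrapping the call externally like this slightly undercounts (it includes prompt processing time), but it's close enough to tell 8 t/s from 30 t/s.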

Apple's M4 lineup is just so awesome. I had my M1 Pro and M1 Max (14/16 inch); the 14-inch M1 Pro had fan noise at basic tasks, so I upped to a 16-inch and then used it for a long while SITTING ON A DESK, PLUGGED IN FOREVER. I waited until the M3 Max 16-inch and thought this is the one! Then it spent 18 months on a desk, lol, yet again. I bought an M3 13.6-inch, and let me tell you, the size and weight are just so great. I used that laptop more than my M3 Max when I was around the house, outdoors getting my kids, etc. Recently I sold the M3 Max because it was just plugged in all the time, and I wanted a desktop. The new Mac Studios were out, and I just wanted more power in a Mac. So I ordered the M3 Ultra base config but decided NO after some LLM tests, which proved it just doesn't have fast enough compute at this moment in time; even with the huge 512GB memory upgrade, it makes no sense when the inference is just terribly slow.

Anyway, I decided on the M4 Max with 128GB memory, because on my 64GB M3 Max even this QwQ-32B 4-bit ate so much memory that I ran out for all the other tasks on the laptop. It was actually struggling! But now the 128GB has enough room for a small LLM and all the tasks I need to run in the background (Xcode, Visual Studio Code, Affinity Photo, etc.).

MLX is optimized for sure; that's why I'm getting much better inference response with this small LLM. The M4 is new enough that the SME extension is included and used automatically with updated compilers, etc. The M3 and prior had the older AMX instead.

Honestly, this is an expensive purchase for me to justify (for a Mac). On a PC you can at least game, and I have 2x 3090 GPUs, which is great, but man, the heat and the electricity are just not good. My small 100 sq ft gaming room heats up so much that I sweat when gaming CoD. Ridiculous. But that's PC for ya. I don't use iPads anymore, so I opted for a desktop this time around, and I am going to keep it for a long while. It is the perfect desktop machine for me because it's so efficient and quiet and just does everything I want, plus more. The AI/ML engineering stuff I still run on the PC for the sheer compute power, but the Mac Studio is just amazing for everything else that's not AI.

The 14-inch MacBook Pro 12c is so perfect; the size is just right for me. The MacBook Air is a better form factor, but it's not that bad to go up to a 14-inch. So if you're on the fence about getting a Mac Studio, get the M4 Max, even the base config, but up the RAM. No one needs 128GB memory for day-to-day stuff; it's this niche AI/LLM stuff that requires it. The M3 Ultra has its place, but man, the price does not work for me, and its architecture is a generation old.

I'll get a real tokens/s number soon. The M5 will be a slight upgrade, but until they really shrink the node on the M6, I don't know if you're gonna get performance gains as substantial as going from M3 -> M4. The single-core performance alone is worth it. Not many apps use multicore, so single-core performance is the major factor in how snappy everything runs.
 

PCs still rule and rock. If you want gaming, either buy a console or do PC gaming. I'm more on the side of consoles these days because PC games are just so unoptimized; I pay thousands for a CPU/GPU setup to game at high fidelity and there are problems (though they do get fixed eventually) - and then there's all the hoopla about fake frames these days. I don't do anything but AI work on PCs now, but with this new Mac Studio I can still run small AI models and do everything else on it.

You can task any M processor with photography/video exporting using the dedicated encoders. The most amazing thing about these processors from Apple is the efficiency - they are just bar none the best.

The AI stuff is just experimental for home use. It's more prudent to pay for online AI instead. Don't spend thousands on a computer just to do this at home unless you must have privacy, or you just like having a model run on your own hardware to test, fine-tune, etc. You can pay for Grok, ChatGPT, or Claude for years and still come out cheaper than buying a full M3 Ultra Mac Studio w/512GB memory! An M4 Ultra would be insane. But you know Apple has plans, so maybe end of year or next year you get an M5 Ultra instead.
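The subscription-vs-hardware arithmetic is easy to run; the prices below are rough assumptions for illustration (~$20/month for a typical consumer AI plan, ~$9,500 for a 512GB M3 Ultra Studio configuration):

```python
# How many years of a paid AI subscription equal one maxed-out Mac Studio?
# Prices are rough assumptions, not quotes.

subscription_per_month = 20     # e.g. a consumer ChatGPT/Claude plan
studio_price = 9500             # M3 Ultra Studio with 512GB, approx.

years = studio_price / (subscription_per_month * 12)
print(f"~{years:.1f} years of subscription for the same money")
```

Even if you assume a pricier subscription tier, the hardware takes many years to pay for itself, which is the post's point: buy the big machine for privacy or tinkering, not to save money.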

Bottom line: the M4 lineup is just so good. You cannot go wrong with ANY M4 configuration. And my biggest reason for getting this is that I do not have to worry about battery SoH at all. It's always plugged in, sipping <7 watts at idle. I can run anything for hours or days (fine-tuning LLMs) and not worry about destroying a battery. And now I can just get a smaller laptop for light work/email, etc. while the desktop does the heavy lifting.

Yeah, Apple is the king of marketing - they are so good you almost can't get what YOU want. You're always going to buy what THEY steer you to buy - to get more memory, you have to up the processor, etc. They are the best at doing this.

Anyway, too much digressing - I ordered my Mac Studio CTO on a Saturday morning and got it the following Wednesday, and that's some ultra-fast shipping (it was shipped to an Apple Store, however!)
 

Thank you so much for another great post. We use pretty much the same apps (though I lean towards JetBrains for IDE) and I have pretty close to the same use case. You answered a lot of my questions and have pushed me towards the M4 Max now.

I was having a tough time deciding because with the Microcenter M3 Ultra deal for $3399, it is actually $300 cheaper than the M4 Max with 128GB/1TB SSD.

I just refreshed the page before posting and saw your last post about gaming. I ended up doing the same thing and dedicated two of my Xeon workstations into a rack setup. One of them had a GPU for gaming, but I ended up deciding to just go console for the same reasons.

Thanks again. I really appreciate all of your insights.
 
Just wasn't cutting it but I'm going to tell you that if you're going to do AI stuff don't bother with apple. I have 2x 3090 PC and it will smoke even the M3U but the PC will put nasty amount of heat and use up a lot of electricity. ... apple is just bar none spectacular with efficiency. Just not suited at the moment for big models.
Which raises an option: buy a Mac (Studio or Mini), then buy an Nvidia DGX Spark (fka "Digits") and connect the two (over 10GbE, or USB4?). Exo can work with heterogeneous systems (correct?)

I guess the apple of every LLM nerd's eye is the newly announced DGX Station, but I bet one of those will cost upwards of $25K and heat a whole house. To say nothing of the noise.
 

Dude, here's another thought before I log off - I know the feeling, brother. You pay up the fanny for an M3U, for example; next year there's another model, and you say to yourself, man, my inference speeds would increase! Lol, I tell you from past wasteful trade-in experiences that it's really not worth chasing when buying a Mac. Honestly, these machines since the M-series debut are so good they will work fine (waiting a little longer is OK).

The M3U has its purpose, but that purpose diminishes big time: you have the option to go higher on memory, but the compute power is not there, so it's moot. Sure, my own 3090s' memory is nothing compared to the M3U, but at least the compute power is there, and it's significantly faster too. If you run big LLMs (at this moment in time, 2025) without the compute power, it makes no sense to go with the M3U, or a Mac at all. You have all those extra cores, be it 60c or 80c, but for everyday use it's overkill, and don't forget it's still one generation older architecture. With the M4 Max, at least, you're probably not going to see a huge gain in single-core performance when the M5 comes out. The M6 is a different story, however.

I've always chased the "future proof" path, but let me tell you, it's bxllsxxt. I got rid of my M3 Max 64GB MacBook Pro because I thought it would be all I need, and 18 months later, see ya! Just get what you can afford now and use it, and don't even sweat the next model; it will for sure be better. I just need something to run smaller LLMs to experiment with and still have memory left over for other tasks. A lot of people tell you that you can fit such-and-such size LLM in memory, but they don't tell you that the overall UI experience sucks if you're cutting it close - you've still got apps and websites and books, etc. loaded, and you still want to use the system for other things while inference is running. And if you have 64GB memory, that's way too close for comfort. I ran that QwQ-32B 4-bit model and the entire system was using around 80GB. That still leaves me plenty of room for everything else! But you know Apple - can't do 96GB, gotta go 128GB instead.

Look at the M4 Max. Don't get the M3U if you still want to do AI models. Apple will most likely catch up on GPU compute power sooner than anyone thinks. Just not right now. Good luck!

PS - what's amazing is that the M4 Max in a laptop is really no different than in the Mac Studio. It's the exact same processor, and of course the thermal headroom is superior in the Mac Studio, but the laptop does the same things the Mac Studio does. I opted against another laptop because you'd need the 16-inch size, and due to battery health I don't want to run one maxed out all the time. Also, I don't go anywhere lugging a big 16-incher around either. The desktop is significantly cheaper too!
 

I am waiting for DIGITS as well, but until we see what it can do, we won't know! It's not going to sell at the $3k MSRP; I don't think I've purchased anything NVIDIA at MSRP these days, lol. Memory-bandwidth-wise it's weaker than the M4 Max on paper. But then again, I can do so much more with the Mac Studio too. Yeah, the heat and electricity for all this AI stuff is really quite crazy. It's like the bitcoin rush, lol.

Anyway, for big AI stuff I still pay a subscription for online AI. Superior performance as well. But I think Apple hardware will close the gap soon. They are behind now, but they are catching up!
 
I bought a base version that I returned today... I ordered a step-up version that is supposed to arrive on Monday (it was originally Friday, but it fell victim to a plane delay in Alaska).

Is that a disk drive add-on on the bottom of your Studio? If so, which one?
 
Sorry, it's an Intel Core 2 Duo Mac mini! It's not used for anything; it's just there to raise the Mac Studio off my desk because of dust. I have some kind of $19 plastic filter tray/stand from Amazon coming tomorrow. I was so surprised it's the exact footprint of the (pre-M4) Mac mini. I just have a bunch of TB4 docks I'm using now. I am interested in that SanDisk Extreme PRO external drive they released - 3300MB/s speeds, USB4.
 
DIGITS will be used for training and fine-tuning; it is likely too slow for inference. The Mac is too slow for inference on anything bigger than 32B parameters.
 