I was wondering how many people here are experimenting with local LLMs, AI agents, and the like.
I saw that they claimed AI processing is up to 4x faster on the M5 chip than the M4. I wonder how it is in real world usage, and how much of an impact it makes on the overall experience of using these agents.
I am considering moving from my M4 Air to an M5. My main motivation is that I made a mistake in getting 512GB when in reality I need a lot more space for my needs, so was considering going with 2TB this time.
But knowing how much better the AI experience is could also help me justify the move. Otherwise, I might just wait until next year and put up with the lower storage for now.